
Lecture Notes in

Engineering
Edited by C. A. Brebbia and S. A. Orszag

IFIP 61
A. Der Kiureghian,
P. Thoft-Christensen (Eds.)

Reliability and Optimization


of Structural Systems '90
Proceedings of the 3rd IFIP WG 7.5 Conference
Berkeley, California, USA, March 26-28, 1990

Springer-Verlag
Berlin Heidelberg New York London
Paris Tokyo Hong Kong Barcelona
Series Editors
C. A. Brebbia · S. A. Orszag

Consulting Editors
J. Argyris · K.-J. Bathe · A. S. Cakmak · J. Connor · R. McCrory
C. S. Desai · K.-P. Holz · F. A. Leckie · G. Pinder · A. R. S. Ponter
J. H. Seinfeld · P. Silvester · P. Spanos · W. Wunderlich · S. Yip

Editors
A. Der Kiureghian
University of California
Dept. of Civil Engineering
721B Davis Hall
Berkeley, California 94720
USA

P. Thoft-Christensen
The University of Aalborg
Institute of Building Technology
and Structural Engineering
Sohngaardsholmsvej 57
9000 Aalborg
Denmark

ISBN-13: 978-3-540-53450-1    e-ISBN-13: 978-3-642-84362-4
DOI: 10.1007/978-3-642-84362-4

This work is subject to copyright. All rights are reserved, whether the whole or part of the material
is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation,
broadcasting, reproduction on microfilms or in other ways, and storage in data banks. Duplication
of this publication or parts thereof is only permitted under the provisions of the German Copyright
Law of September 9, 1965, in its version of June 24, 1985, and a copyright fee must always be paid.
Violations fall under the prosecution act of the German Copyright Law.
© International Federation for Information Processing, Geneva, Switzerland, 1991

The use of registered names, trademarks, etc. in this publication does not imply, even in the
absence of a specific statement, that such names are exempt from the relevant protective laws
and regulations and therefore free for general use.

61/3020-543210 Printed on acid-free paper.


PREFACE

This proceedings volume contains 33 papers presented at the 3rd Working Conference on "Reliability
and Optimization of Structural Systems", held at the University of California, Berkeley,
California, USA, March 26 - 28, 1990. The Working Conference was organised by the IFIP (Inter-
national Federation for Information Processing) Working Group 7.5 of Technical Committee 7 and
was the third in a series, following similar conferences held at the University of Aalborg, Denmark,
May 1987 and at the Imperial College, London, UK, September 1988. The Working Conference
was attended by 48 participants from 12 countries.
The objectives of Working Group 7.5 are:
• to promote modern structural systems optimization and reliability theory,
• to advance international cooperation in the field of structural system optimization and reliability
theory,
• to stimulate research, development and application of structural system optimization and reli-
ability theory,
• to further the dissemination and exchange of information on reliability and optimization of
structural systems,
• to encourage education in structural system optimization and reliability theory.
At present the members of the Working Group are:

A. H.-S. Ang, U.S.A. G. Augusti, Italy


M. J. Baker, United Kingdom P. Bjerager, Norway
C. A. Cornell, U.S.A. R. B. Corotis, U.S.A.
M. Grigoriu, U.S.A. A. Der Kiureghian, U.S.A.
O. Ditlevsen, Denmark D. M. Frangopol, U.S.A.
H. Furuta, Japan S. Garribba, Italy
M. R. Gorman, U.S.A. M. Grimmelt, Germany, F. R.
N. C. Lind, Canada H. O. Madsen, Denmark
R. E. Melchers, Australia F. Moses, U.S.A.
Y. Murotsu, Japan A. S. Nowak, U.S.A.
R. Rackwitz, Germany, F. R. C. G. Soares, Portugal
J. D. Sørensen, Denmark P. Thoft-Christensen (chairman), Denmark
Y.-K. Wen, U.S.A.

Members of the Organizing Committee were:


M. J. Baker, UK
P. Bjerager, Norway
D. M. Frangopol, U.S.A.
A. Der Kiureghian (co-chairman), U.S.A.
P. Thoft-Christensen (co-chairman), Denmark

The Working Conference received financial support from IFIP, the University of California at
Berkeley, and the University of Aalborg.
On behalf of WG 7.5 and TC-7 the co-chairmen of the Conference would like to express their
sincere thanks to the sponsors, to the members of the Organizing Committee for their valuable
assistance, and to the authors for their contributions to these proceedings. Special thanks are due
to Mrs. Kirsten Aakjær, University of Aalborg, for her efficient work as conference secretary, and
to Ms. Gloria Partee, University of California at Berkeley, for her valuable assistance in carrying
out local organizational matters.

June 1990

A. Der Kiureghian P. Thoft-Christensen


CONTENTS

Short Presentations
A New Beta-Point Algorithm for Large Time-Invariant and Time-Variant Reliability
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
T. Abdo, R. Rackwitz
Rigid-Ideal Plastic System Reliability . . . . . . . . . . . . . . . . . . . . . . . 13
Torben Arnbjerg-Nielsen, Ove Ditlevsen
Optimal Allocation of Available Resources to Improve the Reliability of Building Systems 23
Giuliano Augusti, Antonio Borri, Emanuela Speranzini
Life Expectancy Assessment of Structural Systems . . . . 33
Bilal M. Ayyub, Gregory J. White, Thomas F. Bell-Wright
Parameter Sensitivity of Failure Probabilities. 43
Karl Breitung
Expectation Ratio Versus Probability . . . . 53
F. Vasco Costa
Application of Probabilistic Structural Modelling to Elastoplastic and Transient Analysis 63
T. A. Cruse, H. R. Millwater, S. V. Harren, J. B. Dias
Reliability-Based Shape Optimization Using Stochastic Finite Element Methods 75
Ib Enevoldsen, J. D. Sørensen, G. Sigurdsson
Calibration of Seismic Reliability Models. . . . . . . . . . . . . . . . . . 89
Luis Esteva
Computational Experience with Vector Optimization Techniques for Structural Systems 99
Dan M. Frangopol, Marek Klisinski
Management of Structural System Reliability. . . . . . . . . 113
Gongkang Fu, Liu Yingwei, Fred Moses
Reliability Analysis of Existing Structures for Earthquake Loads 129
Hitoshi Furuta, Masata Sugito, Shin-ya Yamamoto, Naruhito Shiraishi
Sensitivity Analysis of Structures by Virtual Distortion Method. . . . 139
J. T. Gierlinski, J. Holnicki-Szulc, J. D. Sørensen
Reliability of Daniels Systems with Local Load Sharing Subject to Random Time
Dependent Inputs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Mircea Grigoriu
Reliability Analysis of Elasto-Plastic Dynamic Problems. . . . . . . . . . 161
Toshiaki Hisada, Hirohisa Noguchi, Osamu Murayama, Armen Der Kiureghian
Identification of Autoregressive Process Model by the Extended Kalman Filter 173
Masaru Hoshiya, Osamu Maruyama

The Effect of a Non-Linear Wave Force Model on the Reliability of a Jack-Up Platform 185
J. Juncher Jensen, Henrik O. Madsen, P. Terndrup Pedersen
Optimum Cable Tension Adjustment Using Fuzzy Regression Analysis . 197
Masakatsu Kaneyoshi, Hiroshi Tanaka, Masahiro Kamei, Hitoshi Furuta
Bayesian Analysis of Model Uncertainty in Structural Reliability . . . 211
Armen Der Kiureghian
Size Effect of Random Field Elements on Finite-Element Reliability Methods. 223
Pei-Ling Liu
Reliability-Based Optimization Using SFEM . . . . . . . . . 241
Sankaran Mahadevan, Achintya Haldar
Classification and Analysis of Uncertainty in Structural Systems 251
William Manners
Directional Simulation for Time-Dependent Reliability Problems 261
Robert E. Melchers
Some Studies on Automatic Generation of Structural Failure Modes. 273
Yoshisada Murotsu, Shaowen Shao, Ser Tong Quek
Sensitivity Analysis for Composite Steel Girder Bridges 287
Andrzej S. Nowak, Sami W. Tabsh
Long-Term Reliability of a Jackup-Platform Foundation 303
Knut O. Ronold
Constant Versus Time Dependent Seismic Design Coefficients. 315
Emilio Rosenblueth, Jose Manuel Jara
Reliability of Structural Systems with Regard to Permanent Displacements. 329
J. D. Sørensen, P. Thoft-Christensen

Reliability of Current Steel Building Designs for Seismic Loads 339


Y. K. Wen, D. A. Foutch, D. Eliopoulos, C.-Y. Yu
Jackup Structures: Nonlinear Forces and Dynamic Response . 349
Steven R. Winterstein, Robert Løseth
Stochastic Programs for Identifying Significant Collapse Modes in Structural Systems 359
James J. Zimmermann, Ross B. Corotis, J. Hugh Ellis

Lectures
Critical Configurations of Systems Subjected to Wide-Band Excitation 369
Takeru Igusa
On Reliability-Based Optimal Design of Structures 387
P. Thoft-Christensen
Index of authors 403
Subject index 405
A NEW BETA-POINT ALGORITHM FOR LARGE TIME-INVARIANT
AND TIME-VARIANT RELIABILITY PROBLEMS

T. Abdo & R. Rackwitz


Technical University of Munich, F.R.G.

Introduction

Several methods exist to compute the probability integrals occurring in structural reliability. These
integrals have the general form

    I(V) = ∫_V h(x) f(x) dx                                                  (1)

where V is the integration domain, X the vector of uncertain basic variables, h(x) a certain smooth
function, and f(x) the (continuous) probability density of X. In the general case V is given by
V = ∩_{i=1}^{m} V_i with V_i = {g_i(x) ≤ 0} as individual failure domains. The state functions g_i(x) are
assumed to be (locally) twice differentiable. This integral includes as special cases simple probability
integrals, integrals for the mean number of excursions in random process or random field theory, and
certain expectations. In many cases there is h(x) = 1 and m = 1. Because numerical integration fails
to be practicable except in very low dimensions of X, approximate methods have been developed; the
most important ones are based on the theory of asymptotic Laplace integrals (Breitung, 1984;
Hohenbichler et al., 1987). According to that theory a critical point x* in V needs to be located where
the integrand function f(x) becomes maximal and where the boundary of the failure domain can be
expanded into a first or second order Taylor series so that integration is analytic. So far computations
are mostly performed in the so-called standard space, i.e. the original vector X is transformed by an
appropriate probability distribution transformation into a standard normal vector with independent
components. Proposals have also been put forward which do not need this probability distribution
transformation (Breitung, 1989). It appears, however, that this advantage only rarely compensates
for certain difficulties which, apparently, must mainly be regarded as problems of scaling in the
numerical analysis. In the following we discuss primarily formulations in the standard space. A similar
discussion for original space formulations will be presented in a separate paper. In any case integration
is reduced to a search for the critical point and some simple algebra. Here, we will be concerned only
with the search for the critical point.
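To see why direct evaluation of Eq. (1) is avoided in practice, the following illustrative Python sketch (not from the paper; the linear limit-state function and all numbers are invented for illustration) estimates a simple failure probability P(g(U) ≤ 0) by crude Monte Carlo and compares it with the exact value Φ(−β), which is available in closed form for a linear g in standard normal space:

```python
import math
import random

random.seed(0)

def g(u, beta=2.0):
    # linear limit-state function in the standard normal space;
    # for this g the exact failure probability is Phi(-beta)
    return beta - sum(u) / math.sqrt(len(u))

n, samples = 10, 50_000
fails = sum(1 for _ in range(samples)
            if g([random.gauss(0.0, 1.0) for _ in range(n)]) <= 0.0)

pf_mc = fails / samples
pf_exact = 0.5 * math.erfc(2.0 / math.sqrt(2.0))   # Phi(-2), about 0.0228
print(pf_mc, pf_exact)
```

Even for this ten-dimensional linear case tens of thousands of samples are needed for two-digit accuracy, which is one motivation for the asymptotic approximations built around the critical point.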

The first important step towards an accurate and efficient failure probability calculation was made by
Hasofer/Lind (1974). Their search algorithm was a simple gradient algorithm. Later it was modified
and made globally convergent by Rackwitz/Fiessler (1978). The most important modification consisted
in truncating the step length as the number of iterations increased, according to Psenicnyi (1970).
Another improvement consisted in adding certain line searches, which made the algorithm quite reliable
but also expensive in complicated cases. In the following this algorithm will be denoted the RF-algorithm
for brevity of notation. Later investigations by a number of authors have studied almost the full
spectrum of available algorithms in mathematical programming (see especially Liu/Der Kiureghian, 1986,
for a thorough discussion) for their applicability in structural reliability calculations, such as gradient-
free algorithms, penalty methods, and stochastic programming methods, but also the so-called sequential
quadratic programming method (SQP-method), which theoretically is now considered the most
efficient method (see, for example, Gill et al., 1981; Hock/Schittkowski, 1983; Arora, 1989). Based on
their studies Liu/Der Kiureghian (1986) proposed a certain merit function for the line searches in the
RF-algorithm and could demonstrate both efficiency and reliability of the improved algorithm in a
number of examples. This modification will be denoted the LDK-algorithm. The RF-algorithm and the
LDK-algorithm have originally been designed only for single constraint problems, but generalizations to
multi-constraint problems have recently been proposed independently by Abdo (1989) and
Sørensen/Thoft-Christensen (1989). A first general conclusion from a variety of related and in part
unpublished studies is that it is worthwhile to adjust general purpose algorithms to the specific
settings in reliability computations. A second conclusion is that relatively small changes in the
algorithms can substantially improve their behavior. The third conclusion from these studies is that the
gradient-based algorithms in their present form are only slightly less efficient than the SQP-algorithms
in not too large dimensions and for well-behaved problems. The RF-algorithm and, to a lesser
degree, the LDK-algorithm are clearly inferior to the latter for highly curved constraint functions under
otherwise similar conditions, due to the better convergence behavior (fewer iterations) of the
SQP-algorithms. This is especially true when very expensive structural state functions are involved,
such as finite element structural analyses. All other types of search algorithms appear to be serious
competitors to the RF-, LDK-, or SQP-algorithms only in very special cases. However, for larger
dimensions of the uncertainty vector, larger than 50, say, all implementations of the SQP-algorithm
available to us became less efficient and less reliable than gradient-based algorithms. This is easily
available to us became less efficient and less reliable than gradient-based algorithms. This is easily
explained by the fact that those algorithms use numerical updates of certain Hessian matrices which in
part are just the reason for their efficiency. For larger dimensions the update must remain crude when
the number of iterations is significantly smaller than the dimension of the Hessian. Hence, the
theoretical superiority of the SQP-algorithm above gradient-based algorithms turns out to be not
significant in practical applications. But more important, SQP-algorithms failed to converge reliably in
higher dimensions for reasons to be explained later in detail. Also, storage and CPU time requirements
became rather high. It may therefore be asked whether it is possible to overcome the shortcomings of
the SQP-algorithms in higher dimensions while retaining its otherwise favorable properties.

In this paper the SQP-algorithm will briefly be reviewed with special reference to possible problems.
The SQP-algorithm is then specialized for reliability calculations in the standard space. Certain
simplifications and modifications are introduced resulting in a new multi-constraint algorithm. It can
also be used in time-variant problems with little modification. Results of numerical tests are reported.

Sequential Quadratic Programming in Nonlinear Constrained Optimization

The general optimization problem can be written in the classical form:

    min f(u)                                                                 (2)

subject to the inequality constraints

    g_j(u) ≤ 0,   j = 1, 2, …, m

The Lagrangian function of the general problem is defined by

    L(u) = f(u) + Σ_{j=1}^{m} λ_j g_j(u)                                     (3)

where λ_j are the so-called Lagrange multipliers. The Kuhn-Tucker optimality conditions for an
optimal point u* are

    ∇L(u*) = ∇f(u*) + Σ_{j=1}^{t} λ_j* ∇g_j(u*) = 0                          (4a)

    g_j(u*) = 0,  λ_j* ≥ 0,   j = 1, 2, …, t                                 (4b)

    λ_k* = 0,   k = t+1, …, m                                                (5)

where t is the number of active inequality constraints and ∇ is the gradient operator. The inclusion of equality
constraints is easily done by considering them always as active constraints. If n is the number of
variables and the active constraints are known, an optimal point u* can be found by solving the system
of nonlinear equations (4) for n + t unknowns, namely the n components of the solution point u* and the t
Lagrange multipliers λ*.

An algorithm based on the solution of this system of equations by a direct method such as the Newton
method is extremely inefficient. It must also be noticed that the set of active constraints at the
solution point is not known in advance. The Sequential Quadratic Programming method is generally
considered the most efficient for a solution. This method replaces the original problem by a sequence of
quadratic programming problems which are exactly solvable and which approximate the original one.
This is done by approximating the Lagrangian function by its second order Taylor expansion at an
initial point u_0:

    L(u) = L(u_0) + ∇L_0^T (u − u_0) + ½ (u − u_0)^T ∇²L_0 (u − u_0)         (6)

where

    L(u_0) = f(u_0) + Σ_{j=1}^{m} λ_j g_j(u_0)                               (7)

    ∇L_0 = ∇f_0 + Σ_{j=1}^{m} λ_j ∇g_j^0                                     (8)

    ∇²L_0 = ∇²f_0 + Σ_{j=1}^{m} λ_j ∇²g_j^0                                  (9)

and ∇²f_0 represents the Hessian of the function f at the point u_0. For the formulation of the optimality
conditions the constraint functions are approximated by their first order Taylor expansion. The
Kuhn-Tucker conditions are then written for the set of active constraints in the solution point of the
quadratic programming problem. With a so-called active set strategy (see Gill et al. (1981)) it is
possible to determine which constraints are to be included in the formulation in each quadratic
problem of the sequence. The optimality conditions for any iteration point k of the sequence of
quadratic expansions are

    ∇L(u) = ∇L_k + ∇²L_k (u − u_k)
          = ∇f_k + Σ_{j=1}^{t} λ_j^k ∇g_j^k + [∇²f_k + Σ_{j=1}^{t} λ_j^k ∇²g_j^k] (u − u_k) = 0      (10)

    g_j(u) = g_j(u_k) + ∇g_j^{kT} (u − u_k) = 0,   j = 1, 2, …, t            (11)

In matrix form these equations can be written as

    [ ∇²L_k   G_k ] [ Δu_k ]   [ −∇f_k ]
    [ G_k^T    0  ] [ λ^k  ] = [ −f_k  ]                                     (12)

with

    Δu_k = u_{k+1} − u_k                                                     (13)

    G_k = [∇g_1^k, …, ∇g_j^k, …, ∇g_t^k]_{n×t}                               (14)

    f_k^T = [g_1(u_k), …, g_j(u_k), …, g_t(u_k)]_{1×t}                       (15)

The exact calculation of the second order derivatives for the Hessian matrix in Eq. (12) is generally too
expensive and cannot be efficiently implemented for a general case. Therefore, the gradient
information obtained in each point during the iteration is used to build up an approximation of this matrix
using one of the known update formulas (see Gill et al. (1981)). In the first iteration a unit matrix is
used instead of the true Hessian to solve the system of equations Eq. (12). The solution of this
quadratic problem with linear constraints defines a direction (Δu_k) in which a line search is performed.
This one-dimensional search is performed to obtain an optimal decrease of the objective and the
constraint functions in that direction. The new iteration point is defined by

    u_{k+1} = u_k + ν_k Δu_k                                                 (16)

where ν_k is found by minimizing the descent function

    Ψ(u_k + ν_k Δu_k) = f(u_k + ν_k Δu_k) + Σ_{j=1}^{m} [λ_j g_j(u_k + ν_k Δu_k) + ½ r_j g_j²(u_k + ν_k Δu_k)]      (17)

with

    (18)

and r^0 = |λ^0|. This augmented Lagrangian function was proposed by Schittkowski (1981), who proved
global convergence of the algorithm with this definition of the descent function. The process stops
when the optimality conditions of the original problem are satisfied. In general, ν_k needs to be
determined only approximately, e.g. by quadratic or cubic interpolation.
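The step length selection can be sketched with a simple backtracking rule in place of the interpolation. The following Python example is illustrative only: it uses a merit function of augmented-Lagrangian form (cf. Eq. (17)) with fixed, invented multiplier and penalty values, and does not reproduce the paper's update rule (18) for r:

```python
def merit(u, lam, r, f, gs):
    # augmented-Lagrangian descent function, cf. Eq. (17):
    # psi(u) = f(u) + sum_j [lam_j g_j(u) + 0.5 r_j g_j(u)^2]
    val = f(u)
    for lam_j, r_j, g_j in zip(lam, r, gs):
        gj = g_j(u)
        val += lam_j * gj + 0.5 * r_j * gj * gj
    return val

def step_length(u, du, lam, r, f, gs, shrink=0.5, max_halvings=30):
    # crude backtracking stand-in for the quadratic/cubic interpolation
    # mentioned in the text: halve nu until the merit function decreases
    psi0 = merit(u, lam, r, f, gs)
    nu = 1.0
    for _ in range(max_halvings):
        u_new = [ui + nu * di for ui, di in zip(u, du)]
        if merit(u_new, lam, r, f, gs) < psi0:
            return nu, u_new
        nu *= shrink
    return 0.0, list(u)          # no acceptable step found

# toy problem: f(u) = ||u||^2 with one linear constraint g(u) = 2 - u1
f = lambda u: sum(x * x for x in u)
gs = [lambda u: 2.0 - u[0]]
nu, u_new = step_length([0.0, 0.0], [2.0, 0.0], lam=[4.0], r=[4.0], f=f, gs=gs)
print(nu, u_new)   # -> 1.0 [2.0, 0.0]
```

Here the full step ν = 1 already decreases the merit function, so it is accepted without any halving.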

The most time consuming part in this algorithm is the updating of the Hessian matrix, or of its triangular
decomposition, and the solution of the system of equations. In each iteration 10n² + O(n) arithmetic
operations are required. The update formulae determine the exact Hessian of quadratic functions after
n updates. A fair approximation of the Hessian of non-quadratic functions is also obtained with
about n updates of the matrix. This means that the approximation used in the few (say ten) iterations
needed to reach convergence cannot be very good when the problem has a large number of variables. Moreover,
the rounding errors during the updating process in large problems can cause the approximate Hessian to
become singular. Close to singularity the search direction can be significantly distorted. In this case the
algorithm has to restart the iteration with a unit Hessian matrix at the point where the singularity
occurred.
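The text does not specify which of the known update formulas is meant; as one concrete instance, the following NumPy sketch applies the standard BFGS update (an assumption for illustration, with an invented toy quadratic) and checks the secant condition that every such update enforces by construction:

```python
import numpy as np

def bfgs_update(B, s, y):
    # standard BFGS formula for the Hessian approximation B;
    # the updated matrix satisfies the secant condition B_new s = y
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

rng = np.random.default_rng(1)
n = 6
A = np.diag(np.arange(1.0, n + 1))   # Hessian of a toy quadratic objective
B = np.eye(n)                        # start from the unit matrix, as in the text
s = rng.standard_normal(n)           # step taken in the current iteration
y = A @ s                            # gradient difference along s (quadratic case)
B = bfgs_update(B, s, y)
print(np.allclose(B @ s, y))         # secant condition holds
```

One update only pins down the Hessian along the single direction s; this is the sense in which the approximation must remain crude when far fewer than n updates are performed.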

The New Algorithm

It is possible to modify the algorithm for reliability applications. As mentioned, the objective function is
a simple quadratic function which can be exactly expressed by its Taylor expansion at any point u_k:

    f(u) = ||u||² = f(u_k) + ∇f_k^T Δu + ½ Δu^T ∇²f_k Δu = ||u_k||² + 2 u_k^T Δu + Δu^T Δu          (19)

This expansion and a first order Taylor approximation of the constraints is used to write the
Lagrangian function as

    L(u, λ) = ||u_k||² + 2 u_k^T Δu + Δu^T Δu + Σ_{j=1}^{m} λ_j {g_j(u_k) + ∇g_j^{kT} Δu}           (20)

For the optimality conditions the constraints are also approximated by a linear expansion and one
obtains the following Kuhn-Tucker conditions:

    ∇L(u, λ) = 2 u_k + 2 Δu + Σ_{j=1}^{t} λ_j ∇g_j^k = 0                     (21a)

    g_j(u) = g_j(u_k) + ∇g_j^{kT} Δu = 0,   j = 1, 2, …, t                   (21b)

where t is the number of active constraints. This system of equations can be written in matrix form
using the definitions in Eqs. (14) and (15):

    [ 2I     G_k ] [ Δu_k ]   [ −2 u_k ]
    [ G_k^T   0  ] [ λ^k  ] = [ −f_k   ]                                     (22)

With this formulation a constant approximation of the true Hessian matrix is obtained. Only the
contribution of the objective function to the Hessian is considered. The solution of this system is
obtained by a Gaussian decomposition of the matrix in Eq. (22):

    λ^k = 2 (G_k^T G_k)^{-1} (f_k − G_k^T u_k)                               (23)

    u_{k+1} = G_k (G_k^T G_k)^{-1} (G_k^T u_k − f_k)                         (24)

This closed-form solution of the system of equations needs only the numerical decomposition of a
small matrix G_k^T G_k of dimension t, which contains the scalar products of the gradients of the
constraints as elements, in each iteration. The result can be written in a simpler way by noting that the
gradient matrix can be given as

    G_k = A_k N_k                                                            (25)

with

    A_k = [a_1^k, …, a_j^k, …, a_t^k]                                        (26)

    a_j^k = ∇g_j^k / ||∇g_j^k||                                              (27)

    N_k = a diagonal matrix with the values of the norms ||∇g_j^k||

With this notation the solution for the new point is written as

    u_{k+1} = A_k R_k^{-1} (A_k^T u_k − N_k^{-1} f_k)                        (28)

where

    R_k = A_k^T A_k                                                          (29)

can be interpreted as the correlation matrix of the constraints at the point k. This matrix is always
positive definite. The use of the diagonal unit matrix as the Hessian matrix of the Lagrangian is
justified because of the special form of the objective function. The mathematical proofs of global
convergence of the algorithm are also valid for a unit matrix when the augmented Lagrangian function
is used as a descent function (Hock/Schittkowski (1983)). The new iteration point is then found by
calculating the optimal step length ν_k with the descent function in Eq. (17):

    u_{k+1} = u_k + ν_k Δu_k                                                 (30)
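As a quick sanity check of the closed-form step of Eqs. (23)-(24), the following NumPy sketch applies it to a single linear constraint, for which one step must land exactly on the minimum-distance point on the constraint surface; the constraint data and starting point are invented for illustration:

```python
import numpy as np

def arf_multi_step(u, g_vals, G):
    # Eqs. (23)-(24): only the small t x t matrix G^T G is factorized;
    # G (n x t) holds the constraint gradients, g_vals the values at u
    GtG = G.T @ G
    lam = 2.0 * np.linalg.solve(GtG, g_vals - G.T @ u)      # Eq. (23)
    u_next = G @ np.linalg.solve(GtG, G.T @ u - g_vals)     # Eq. (24)
    return u_next, lam

# one linear constraint g(u) = a^T u + b; the minimum-norm point on
# g = 0 is -b a / ||a||^2 and is reached in a single step from any start
a = np.array([1.0, 2.0, 2.0])
b = 6.0
u0 = np.array([5.0, -3.0, 1.0])
g0 = np.array([a @ u0 + b])
u1, lam = arf_multi_step(u0, g0, a.reshape(-1, 1))
print(u1)   # -> approximately -b a / ||a||^2 = (-2/3, -4/3, -4/3)
```

For nonlinear constraints the same step is applied to the linearized system, so it must be repeated (with the step length control of Eq. (17)) until convergence.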

In the case of one constraint Eq. (28) simplifies to

    u_{k+1} = (1/||∇g^k||²) [u_k^T ∇g^k − g(u_k)] ∇g^k                       (31)

This is the RF-algorithm in its original form, but here it is combined with the step length criterion by
Hock/Schittkowski (1983). This new algorithm can be shown to be superior to the RF-algorithm in
any dimension and superior to SQP-algorithms for larger dimensions. Due to the specific choice of the
descent function it is also slightly more reliable than the LDK-algorithm but otherwise similar. For
convenience, it will be called the ARF-algorithm in the following. The algorithm further allows an
important generalization for parameter-dependent reliability problems. Parameters occur, for example,
in reliability problems involving non-homogeneous random processes, fields or state functions. Here we
consider only the single constraint case. The extension to the multi-constraint case is straightforward.
Assume that the state function is an explicit or implicit function of a deterministic variable vector T
which, for example, represents time and/or space coordinates. The optimization problem is formally
rewritten as

    min f(ū) = min ||u||²                                                    (32)

under the constraint

    g(ū) ≤ 0                                                                 (33)

with

    ū = [ u ]
        [ T ]                                                                (34)

The Lagrangian function of the problem

    L(ū, λ) = ||u_k||² + 2 u_k^T Δu + Δu^T Δu + λ {g(ū_k) + ∇g^{kT} Δū} + ½ ΔT^T ΔT                 (35)

and the optimality conditions

    ∇L(ū, λ) = 2 u_k + 2 Δu_k + λ ∇g^k + ΔT_k = 0                            (36a)

    g(ū) = g(ū_k) + ∇g^{kT} Δū_k = 0                                         (36b)

are written using a first order expansion of the constraint and by approximating the contribution of the
second derivatives of the constraint with respect to the vector T (including the Lagrange multiplier) by
a unit matrix of the same dimension (n_T) as the vector T. The system of equations in matrix form then
is

    [ 2I          0           ∇_u g^k ] [ Δu_k ]   [ −2 u_k  ]
    [ 0           I           ∇_T g^k ] [ ΔT_k ] = [    0    ]
    [ ∇_u g^{kT}  ∇_T g^{kT}     0    ] [  λ   ]   [ −g(ū_k) ]               (37)

where

    ∇g^{kT} = [∂g(ū_k)/∂u_1, …, ∂g(ū_k)/∂u_n | ∂g(ū_k)/∂T_1, …, ∂g(ū_k)/∂T_{n_T}] = [∇_u g^{kT}, ∇_T g^{kT}]      (38)

The solution of this system is

    Δu_k = −u_k + [1/(||∇_u g^k||² + 2||∇_T g^k||²)] [u_k^T ∇_u g^k − g(ū_k)] ∇_u g^k               (39)

    ΔT_k = [2/(||∇_u g^k||² + 2||∇_T g^k||²)] [u_k^T ∇_u g^k − g(ū_k)] ∇_T g^k                      (40)
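The single-constraint iteration of Eq. (31) is compact enough to sketch directly. In the following Python example the limit-state function, its analytic gradient, and the iteration count are invented for illustration, and the full step ν = 1 is taken instead of the merit-function step length of Eq. (17):

```python
import math

def rf_step(u, g, grad_g):
    # Eq. (31): u_{k+1} = [u^T grad(g) - g(u)] grad(g) / ||grad(g)||^2
    gv = g(u)
    dg = grad_g(u)
    norm2 = sum(d * d for d in dg)
    c = (sum(ui * di for ui, di in zip(u, dg)) - gv) / norm2
    return [c * d for d in dg]

# illustrative limit-state function with small curvature
g = lambda u: 3.0 - u[0] - 0.05 * u[1] ** 2
grad_g = lambda u: [-1.0, -0.1 * u[1]]

u = [1.0, 1.0]
for _ in range(50):
    u = rf_step(u, g, grad_g)

beta = math.sqrt(sum(x * x for x in u))
print(beta, g(u))   # beta -> 3.0, g(u) -> 0
```

At convergence u is proportional to ∇g and lies on g = 0, which is exactly the characterization of the critical (beta) point; for strongly curved g this full-step iteration can oscillate, which is the behavior the step-length safeguard of the ARF-algorithm is meant to prevent.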

Discussion and Comparisons

In a large number of examples of varying problem dimension and partly large curvatures the earlier
findings have been confirmed, i.e. (see Abdo (1989)):

- The RF-algorithm performs well in any dimension when the curvatures of the constraints are
small. The number of iterations is then only slightly larger than for the other, theoretically
superior algorithms. The storage and CPU time requirements are small. However, when the
curvatures become larger the algorithm can require a large number of iterations.

- In smaller dimensions but with more curved constraint functions, SQP-algorithms (here we used
the algorithm NLPQL by Schittkowski (1984) for numerical comparisons) required the least
number of iterations, but more and more storage and CPU time with increasing problem
dimension. For highly curved constraint functions SQP-algorithms are also the most reliable in
small and moderate dimensions. The mentioned algorithm, however, failed to reach convergence
at problem dimensions of around n = 50 and larger due to singularity of the updated Hessians,
and continued to do so even with carefully selected starting vectors. Limited experience with
other SQP-implementations suggests that this is a general feature of SQP-algorithms. The
authors believe that, in principle, the design of a device which avoids this behavior should be
possible. It would most likely be associated with some loss of efficiency.

- The new ARF-algorithm was found to work reliably in any dimension and almost as efficiently as
the pure SQP-algorithms, from which it uses essential elements. It appears that a suitable step
length algorithm is most important for efficiency and reliability. Especially in problems with
larger curvatures the new algorithm can be by orders of magnitude more efficient than the
original RF-algorithm. The storage requirements and CPU time are much smaller than with the
SQP-algorithms and by and large the same as for the RF-algorithm. Numerical comparisons
of the RF-algorithm with the LDK-algorithm showed that only if the parameter r_k is suitably
chosen are reliability and efficiency comparable for the two algorithms. Hence it is essentially
Eq. (18) which makes the new algorithm more stable and efficient under all circumstances. For
the special state function

    g(u) = 1.5 Σ_{i=1}^{n} exp[0.10 u_i] − n exp[0.10 u_1]                   (41)

figure 1 shows the required storage in appropriate units versus the number of variables in
Eq. (41). The dramatic increase in storage of the NLPQL-algorithm with problem dimension
limits its use. For example, under the DOS operating system up to 85 variables can be handled,
while the ARF-algorithm can handle problems with up to 2000 variables. In figure 2 the time per
iteration is plotted versus the number of variables in Eq. (41) for the two algorithms, indicating
that the CPU time is comparable only for very small dimensions but increases rapidly when the
number of variables is larger than about 20. In these comparisons one has to keep in mind that
the state function used in the example is very simple. An SQP-algorithm may still be preferable in
lower dimensions when calls of the state function and its derivatives require much computing
time. Finally, it is worth mentioning that the length of the program code of the new algorithm is
less than half of that of most SQP-algorithms.

These findings, of course, refer only to the pre-defined application to reliability calculations in the
standard space. Furthermore, it is important to keep in mind that in actual calculations the proper
scaling of function values and the particular schemes for taking the derivatives of the constraints are of
vital importance. Also, the numerical constants for the convergence criteria and the other
numerical operations must be consistent and adjustable to the specific problem at hand. A
theoretically optimal algorithm can behave much worse than a simple gradient algorithm if
implemented inadequately in this respect.

From some preliminary studies one can conclude that efficient and reliable original space algorithms
should be designed along similar lines, because pure SQP-algorithms show the same drawbacks in higher
dimensions as in the standard space. In this case the objective function simply is the joint density of X
in Eq. (1), or better ln f_X(x). Again, the numerical update of the Hessian of the Lagrangian must be
avoided. With some additional computational effort one can compute analytically the Hessian of the
objective function for most stochastic models, and it appears that with only little loss of efficiency one
can concentrate on the diagonal. This then leads to a scheme very similar to the ARF-algorithm.
However, as mentioned before, some more work is needed to reach final conclusions.

Summary

The search algorithms for the critical point of standard normal probability integrals based only on
gradients and simple step length strategies are theoretically and practically inferior to sequential
quadratic programming algorithms in general applications. The latter, however, show instabilities in
higher problem dimensions, where they also can consume much CPU time and storage. By certain
modifications it is possible to construct a new algorithm which has almost the same convergence
properties as SQP-algorithms in smaller dimensions but does not share their shortcomings in large
dimensional problems. A new algorithm is proposed which essentially is the RF-algorithm but with an
improved step length procedure. It has been generalized to multiple constraints and to include
optimization parameters not explicitly occurring in the objective function, thus facilitating the
numerical solution of time-variant reliability problems.

References:

Abdo Sarras, T. (1989). Zur Zuverlässigkeitsberechnung von statisch beanspruchten Tragwerken mit
unsicheren Systemeigenschaften. Dissertation, Technische Universität München.

Arora, J. S. (1989). Introduction to Optimum Design. McGraw-Hill, New York.

Breitung, K. (1984). Asymptotic Approximations for Multinormal Integrals. Journal of Engineering
Mechanics, ASCE, Vol. 110, No. 3, 357-366.

Breitung, K. (1989). Asymptotic Approximations for Probability Integrals. Technical Report, Institut
für Statistik und Wissenschaftstheorie, München.

Fiessler, B., Neumann, H.-J., and Rackwitz, R. (1979). Quadratic Limit States in Structural
Reliability. Journal of Engineering Mechanics, ASCE, Vol. 105, EM4, 661-676.

Gill, P. E., Murray, W., and Wright, M. H. (1981). Practical Optimization. Academic Press, London.

Hasofer, A. M., and Lind, N. C. (1974). An Exact and Invariant First Order Reliability Format.
Journal of Engineering Mechanics, ASCE, Vol. 100, No. EM1, 111-121.

Hock, W., and Schittkowski, K. (1983). A Comparative Performance Evaluation of 27 Nonlinear
Programming Codes. Computing, Vol. 30, 335-358.

Hohenbichler, M., Gollwitzer, S., Kruse, W., and Rackwitz, R. (1987). New Light on First- and
Second-Order Reliability Methods. Structural Safety, Vol. 4, pp. 267-284.

Liu, P.-L., and Der Kiureghian, A. (1986). Optimization Algorithms for Structural Reliability
Analysis. Report UCB/SESM-86/09, Department of Civil Engineering, University of California,
Berkeley.

Psenicnyi, B. N. (1970). Algorithms for General Mathematical Programming Problems. Cybernetics,
Vol. 6, No. 5, pp. 120-125.

Rackwitz, R., and Fiessler, B. (1978). Structural Reliability under Combined Random Load
Sequences. Computers & Structures, Vol. 9, pp. 484-494.

Schittkowski, K. (1981). The Nonlinear Programming Method of Wilson, Han and Powell with an
Augmented Lagrangian Type Line Search Function. Numerische Mathematik, Vol. 38,
Springer-Verlag, 83-127.

Schittkowski, K. (1983). On the Convergence of a Sequential Quadratic Programming Method with
an Augmented Lagrangian Type Line Search Function. Mathematische Operationsforschung und
Statistik, Vol. 14, 197-216.

Schittkowski, K. (1984). User's Guide for the Nonlinear Programming Code NLPQL. Institut für
Informatik, Universität Stuttgart.

Sørensen, J. D., and Thoft-Christensen, P. (1989). Reliability-Based Optimization of Parallel
Systems. Proc. 14th IFIP TC-7 Conference on System Modelling and Optimization, Leipzig, GDR,
July 3-7, 1989, to be published.

[Plot omitted: storage space versus problem dimension (0 to 160), with a curve labelled NLPQL.]

Figure 1. Storage Space vs. Dimension

[Plot omitted: solution time versus problem dimension (0 to 160), with a curve labelled NLPQL.]

Figure 2. Solution Time vs. Dimension


RIGID-IDEAL PLASTIC SYSTEM RELIABILITY

Torben Arnbjerg-Nielsen & Ove Ditlevsen


Department of Structural Engineering
Technical University of Denmark, DK-2800 Lyngby, Denmark

Abstract A method for upper bound approximations to the reliability with respect to rigid-ideal
plastic structural collapse is presented. Usually only very few collapse mechanisms contribute signifi-
cantly to the total probability of collapse. The problem is therefore to identify these few mechanisms.
The method is illustrated for a spatial frame structure discretized into a finite number of potential
yield hinges. Each potential yield hinge is modeled individually by assigning to it any general
piecewise differentiable yield surface. The associated flow rule is assumed to be valid in all hinges.
Yield strengths and loads are random without restrictions on the choice of the joint distribution
except for operational transformability into a standard Gaussian space. It is shown how the
problem of searching for significant collapse mechanisms can be formulated as a standard constrained
non-linear minimization problem.

Introduction Reliability calculation for rigid-ideal plastic structural systems has
been studied in numerous works [1], [2], [4], [5], [6], [7], [12], [13]. The main reason
for this interest is the consistent and practicable analysis tool provided by the
lower and upper bound theorems of plasticity theory. This means that only the final
state of the structure (failure or safe), and not the ordered sequence of element
failures, is of interest.

In reliability evaluation the upper bound theorem of plasticity has received
most attention, [1], [4], [6], [7], [12], [13]. For non-trivial structures the number
of plastic mechanisms is infinite, so an exact computation of the
reliability is unattainable. However, typically only very few mechanisms contribute
significantly to the probability of collapse, and on the basis of these mechanisms
a close upper bound may be obtained (series system).

In this paper a general formulation for an upper bound approximation to the
reliability with respect to rigid-plastic collapse is set up, based on the upper bound
theorem of plasticity. The upper bound approximation is based on a set of locally
most likely failure modes found by use of standard non-linear minimization.
The formulation includes interaction of the internal forces. Moreover, the yield
strengths and loads are random.

The basic theorems involved in the approach are stated in [1], [4], and [6], but
the way of addressing the problem here is new.

Lower bound on reliability The stresses in an ideal-plastic analysis can be
found from the equations of equilibrium. Normally the number of internal forces
is greater than the number of equations, causing the system to be statically
indeterminate.

Using the standard Gaussian elimination technique with complete pivoting on
the equilibrium equations, it is possible to determine a statically determinate
structure with an associated vector of redundants z. In this way the vector of internal
forces Q is obtained as a linear function of the vector of redundants z and the
vector of external loads R, i.e. Q(R, z).
The structure is discretized into a finite number, r, of yield points (or elements).
A yield condition is associated to each point, and the yield function is defined such
that the set of no yielding

\{ Q_i \mid f_i(Q_i, Y_i) > 0 \}, \quad i = 1, \ldots, r

is convex and nonempty with probability one. Q_i is the vector of internal forces
and Y_i the vector of strength parameters of the ith yield condition with yield
function f_i. For each yield function the associated flow rule is assumed to be valid.

Lower bound theorem of plasticity theory:

The structure is not in a state of plastic collapse if
and only if the redundants z can be chosen such that
f_i(Q_i, Y_i) \ge 0, i = 1, \ldots, r.

Thus the reliability of the structure may be written

1 - P_f = P\big( \cup_{z \in \mathbb{R}^n} [Q(R, z) \in S] \big)

where P_f is the failure probability, n = \dim(z), and Q(R, z) \in S is the event that
f_i(Q_i, Y_i) \ge 0 for all i = 1, \ldots, r. In general the evaluation of this probability is
not practicable. Instead the lower bound

1 - P_f \ge P\big( [Q(R, z) \in S] \big)

has been considered for some suitable fixed value of z. However, in general this
lower bound is not very close, [4], [7], and in the following example no effort is
made to evaluate it.

Upper bound on reliability Consider the ith yield condition. Let \alpha_i be an
admissible strain rate vector adjoined to the internal force Q_i, that is, a strain rate
vector which is consistent with the associated flow rule. For a given yield strength,
Y_i, the plastic dissipation (or internal work) in the yield hinge/element is uniquely
defined by \alpha_i. The external work corresponding to \alpha_i can be written as a scalar
product between the internal force Q_i and \alpha_i. This leads to a linearly associated
lower bound safety margin, [6],

M_i(Q_i, Y_i, \alpha_i) = D(Y_i, \alpha_i) - \langle Q_i, \alpha_i \rangle    (1)

to the safety margin f_i(Q_i, Y_i). D(Y_i, \alpha_i) is the dissipation, and \langle \cdot, \cdot \rangle is the
scalar product. It is noticed that for any internal force q_i outside the yield surface
f_i(q_i, Y_i) = 0 there is an \alpha_i such that M_i(q_i, Y_i, \alpha_i) < 0. From the upper bound
theorem of plasticity theory an upper bound on the reliability can be obtained by
use of (1).

Upper bound theorem of plasticity theory:

The structure is in a state of plastic collapse if there
exists a kinematically admissible velocity field such that
the corresponding total plastic dissipation is less than
the rate of external work.

For a given piecewise continuous velocity field, represented by the strain rate
vectors \alpha_i, i = 1, \ldots, r, the principle of virtual work can be written

\sum_{i=1}^{r} \langle Q_i, \alpha_i \rangle = \langle R, \delta \rangle    (2)

where Q_i, i = 1, \ldots, r, are the internal forces in equilibrium with the external loads
R. The vector \delta is the generalized velocity vector associated to R. The geometrical
compatibility defines \delta so that \langle R, \delta \rangle represents the work rate of R due to the
imposed velocity field. It is seen from (2) that the left side is independent of
the vector of redundants z. This means that any set of admissible strain rate
vectors \alpha_i, i = 1, \ldots, r, which makes \sum_{i=1}^{r} \langle Q_i(R, z), \alpha_i \rangle independent of z is
kinematically admissible (i.e. it fulfils the boundary conditions).

A linear combination of linearly associated lower bound safety margins with
admissible strain rate vectors reads

M = \sum_{i=1}^{r} M_i(Q_i, Y_i, \alpha_i)    (3)

If the strain rate vectors also satisfy the geometrical compatibility, yielding that
\sum_{i=1}^{r} \langle Q_i(R, z), \alpha_i \rangle is independent of z, the strain rate vectors represent a
kinematically admissible velocity field. It is seen from the upper bound theorem
of plasticity that if (3), based on admissible strain rate vectors, is independent
of z, it is an upper bound safety margin. These results are summarized in the
following theorem (see also [6]).

The probability that a sum of linearly associated lower
bound safety margins is positive,

P\Big( \sum_{i=1}^{r} M_i(Q_i, Y_i, \alpha_i) > 0 \Big)

is an upper bound on the probability of no collapse, given
that the sum is independent of z.

It is noticed that any plastic mechanism corresponds to an upper bound safety
margin and vice versa.

The upper bound on the reliability can also be found from the 'only if' part of
the lower bound theorem of plasticity. If the sum (3) is negative for all internal forces
Q_i(R, z), i = 1, \ldots, r, in equilibrium with the external forces R, at least one yield
condition is violated. Since the sum (3) is linear in z (Q_i, i = 1, \ldots, r, is linear
in z), the sum must be independent of z if it is negative for all internal forces
in equilibrium with R. This means that if the sum (3) is independent of z and
negative, the structure is in a state of collapse.

Search for upper bound safety margins of low reliability The search for an
upper bound on the reliability is based on non-linear optimization. It is assumed
that the random variables, i.e. the external loads and the yield strengths, are
transformable into standard Gaussian variables, and that \alpha_i \in \mathbb{R}^{m_i}, i = 1, \ldots, r,
where m_i = \dim(\alpha_i), are admissible strain rate vectors. The probability that an
upper bound safety margin is negative is computed by the FORM approximation,
which for a non-convex limit state might result in an underestimation of the
reliability. This leads to the following minimization problem.

Determine \alpha_1, \ldots, \alpha_r such that

\beta = \min \{ |u| \mid g(u, \alpha) = 0 \}

obtains a local minimum value, given that X^T =
(Y_1^T, \ldots, Y_r^T, R^T) is transformed to the standard and
independent Gaussian variables U, and g(U, \alpha) =
\sum_{i=1}^{r} M_i(Q_i, Y_i, \alpha_i) is independent of z, where \alpha^T =
(\alpha_1^T, \ldots, \alpha_r^T).

This is seen to be a constrained minimization of a scalar function in which the
function evaluation is the standard constrained minimization of the geometrical

reliability index. This problem can be solved by a number of standard methods,
[8], [11]. However, in order to speed up the minimization the constraints with
respect to z can be eliminated, an elimination which also reduces the number of
optimization parameters. By looking at the upper bound (3) it is seen that the
constraints form a set of linear equations, so that up to n of the strain rate
components can be expressed linearly in terms of the remaining components. This
eliminates z from the sum so that it becomes an upper bound safety margin.
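Since Q(R, z) is linear in z, say Q_i(R, z) = Q_i^0(R) + B_i z, the sum (3) is independent of z exactly when \sum_i B_i^T \alpha_i = 0. The reduction to q = (\sum m_i) - n free parameters can therefore be sketched as a null-space computation; the matrices below are random stand-ins, not the example frame's:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in data: r = 3 yield points with m_i = 4 strain rate components each,
# n = 2 redundants; B_i maps the redundants z into the internal forces Q_i.
B = [rng.standard_normal((4, 2)) for _ in range(3)]

# z-independence of the sum (3) requires sum_i B_i^T alpha_i = 0,
# i.e. C @ alpha = 0 for the stacked strain rate vector alpha (length 12).
C = np.hstack([Bi.T for Bi in B])          # shape (n, sum m_i) = (2, 12)

# Basis of the admissible subspace: null space of C via the SVD.
_, s, Vt = np.linalg.svd(C)
rank = int(np.sum(s > 1e-10))
N = Vt[rank:].T                            # columns span {alpha : C alpha = 0}

q = N.shape[1]                             # free parameters: (sum m_i) - n = 10
alpha = N @ rng.standard_normal(q)         # an admissible strain rate vector
assert np.allclose(C @ alpha, 0.0)         # the sum (3) is independent of z
```

Any vector of q reduced parameters mapped through N yields stacked strain rates for which the z-terms cancel, which is the unconstrained parameterization used in the minimization.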
The unconstrained minimization in the reduced strain rate space can be made
by a number of standard methods. Since the most efficient solution methods
include evaluation of second derivatives, an implementation of the gradients of the
function gives faster and more reliable methods. The method used to optimize
the strain rate vectors in the following example is Powell's method, [3], [10],
whereas the scalar function evaluation is made by use of the Schittkowski method,
[11]. The gradient of the scalar function with respect to the jth component of
\alpha_i can be found using that, [9],

\frac{\partial \beta}{\partial \alpha_{ij}} = \frac{1}{|\nabla g(u^*, \alpha)|} \, \frac{\partial g}{\partial \alpha_{ij}}(u^*, \alpha)

where u^* is the solution point of \min\{|u| \mid g(u, \alpha) = 0\}, and \nabla g(u^*, \alpha) is the gradient
of g(u, \alpha) with respect to u evaluated at u^*.
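The sensitivity formula can be checked on a toy limit state that is linear in u, for which \beta(\alpha) is available in closed form; everything below is an illustrative stand-in for the paper's g(u, \alpha):

```python
import numpy as np

# Illustrative limit state, linear in u: g(u, a) = a . u + b, with b > 0.
# For this g the FORM solution is closed-form:
#   beta(a) = b / |a|,   u* = -b a / |a|^2.
b = 3.0

def beta(a):
    return b / np.linalg.norm(a)

def u_star(a):
    return -b * a / np.dot(a, a)

def dbeta_formula(a):
    """Sensitivity via the stated relation:
    dbeta/da_j = (dg/da_j)(u*, a) / |grad_u g(u*, a)|,
    where dg/da_j = u*_j and grad_u g = a for this linear g."""
    return u_star(a) / np.linalg.norm(a)

a = np.array([1.0, 2.0])
# Finite-difference check of the formula, component by component.
eps = 1e-6
fd = np.array([(beta(a + eps * e) - beta(a - eps * e)) / (2 * eps)
               for e in np.eye(2)])
assert np.allclose(dbeta_formula(a), fd, atol=1e-6)
```

The agreement between the analytic sensitivity and the central difference confirms the relation; in the paper's setting the same formula supplies the gradient of \beta with respect to the reduced strain rate parameters without re-solving the inner FORM problem.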
In general, there will be a number of solutions to the minimization problem,
solutions representing different failure modes. The different modes are obtained by
starting the minimization at different points \alpha_1, \ldots, \alpha_r (or, since there is no scaling
in the strain rate space, in different directions). Due to the correlation among the
mechanisms, more than the global minimum (minima) is usually needed in order
to obtain a close upper bound. However, there is no guarantee that the global
minimum (minima) is found.

As described above, the strain rate vectors in an upper bound safety margin
are expressed in terms of q = (\sum m_i) - n free parameters. This means that the
strain rate space has been parameterized in the same way as in the fundamental
mechanisms approach, [13].
Using a different approach, the same type of problem has been studied in [2].
The reliability with respect to plastic collapse is expressed as

1 - P_f = P(g(R, Y) > 0)

where

g(R, Y) = \max_{z \in \mathbb{R}^n} \{ \min_{i=1,\ldots,r} [ f_i(Q_i(R, z), Y_i) ] \}    (4)

based on the lower bound theorem. After a transformation of R and Y into the
standard and independent Gaussian variables U, a transformation which transforms
(4) into the limit state g_U(u) = 0, a FORM approximation is used to identify
one or several likely failure points

\min |u| \quad \text{for} \quad g_U(u) = 0

and these are used in a series system approximation to the reliability. This means
that if the limit state g_U(u) = 0 is convex this approximation is an upper bound
to the reliability. Thus it is seen that the choice of method should be based on a
study of the shape properties of the surface g_U(u) = 0 compared with the shape
properties of the surface g(u, \alpha) = 0, and on the dimension of the reduced strain
rate space compared with the degree of redundancy.

[Figure omitted: spatial frame geometry with applied loads R_1, \ldots, R_6, global axes x_1, x_2, x_3, and member lengths of 4 m and 5 m.]

Fig. 1. Spatial frame.

Example An n = 24 times redundant spatial frame is shown in fig. 1. The
structure is subjected to 4 horizontal forces R_1, R_2, R_4, and R_5, and 2 vertical forces
R_3 and R_6. The eight nodal points are the potential points of yielding. The yield
criterion assigned to the nodal point i is taken as

f_i(q_i, y_i) = 1 - \sum_{j=1}^{nb_i} \left[ \left( \frac{N_j}{N_{jf}} \right)^2 + \left( \frac{M_{yj}}{M_{yjf}} \right)^2 + \left( \frac{M_{zj}}{M_{zjf}} \right)^2 \right] = 0    (5)

where the summation j = 1, \ldots, nb_i is over the number of beams in the nodal
point. N_j is the normal force, and M_{yj} and M_{zj} are the bending moments of the
jth beam in the ith node with reference to the local coordinate system of the
jth beam. N_{jf}, M_{yjf}, and M_{zjf} are the corresponding yield strengths. The local
right-hand oriented coordinate systems, with coordinate ordering x, y, and z,
have the x-axes in the beam directions. For the vertical beams the y-axes are in
the x_3-axis direction, whereas for the horizontal beams they are in the
x_2-axis direction.
The yield strengths are modelled by 6 variables for each beam, where the beams
reach from potential yield hinge to potential yield hinge. The yield strength
variables Y are lognormally distributed, all with a coefficient of variation of V_Y =
0.15. The means are taken as E(N_f) = 20 MN and E(M_{yf}) = E(M_{zf}) = 0.5 MNm
for all beams. The 6 yield strength variables assigned to a beam are equicorrelated
with correlation coefficient 0.9, whereas the yield strengths for different beams are
mutually independent.

The external loads, R, are normally distributed and independent of the yield
strengths. The mean values are

E(R)^T = (0.12, 0.12, 10, 0.25, 0.1, 10) MN

with the coefficients of variation

V_R^T = (0.625, 0.625, 0.05, 0.5, 0.5, 0.05)

The coefficients of correlation are zero except for \rho(R_1, R_2) = \rho(R_4, R_5) = 1. By
this the dimension of the u-space is 52.

The dimension of the strain rate space is 48, but in order to assure that the
strain rate vectors are kinematically admissible, the strain rate space is described
by 24 parameters.
By starting the minimization at 40 simulated strain rate points, 10 different
mechanisms are identified with an average solution time of 88 sec. on an Apollo
10000. The identified mechanisms have the geometrical reliability indices

\beta_1 = 2.896,  \beta_2 = 2.941,  \beta_3 = 2.979,  \beta_4 = 4.478,  \beta_5 = 4.455,
\beta_6 = 4.469,  \beta_7 = 4.478,  \beta_8 = 4.502,  \beta_9 = 4.556,  \beta_{10} = 4.628,

of which only the first three contribute significantly to the collapse probability.
On the basis of the 10 identified mechanisms the reliability of the system can
approximately be bounded by

2.77 < \beta < 2.81
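From the ten reliability indices alone, crude series-system bounds on the collapse probability follow by ignoring the correlation between the mechanism safety margins: \max_i \Phi(-\beta_i) \le P_f \le \sum_i \Phi(-\beta_i). A quick sketch (the tighter interval 2.77 < \beta < 2.81 above exploits correlation information, so these crude bounds are necessarily looser):

```python
from statistics import NormalDist

# Geometrical reliability indices of the ten identified mechanisms (from the text).
betas = [2.896, 2.941, 2.979, 4.478, 4.455, 4.469, 4.478, 4.502, 4.556, 4.628]

phi = NormalDist()  # standard normal distribution
pf_each = [phi.cdf(-b) for b in betas]

# Simple series-system bounds, ignoring correlation between mechanisms:
pf_lower = max(pf_each)        # at least the most likely single mechanism
pf_upper = sum(pf_each)        # union (Boole) bound

# Translate back to system reliability indices: beta = -Phi^{-1}(P_f).
beta_upper = -phi.inv_cdf(pf_lower)
beta_lower = -phi.inv_cdf(pf_upper)
# The crude interval contains the correlation-based bounds 2.77 < beta < 2.81.
assert beta_lower < 2.77 and beta_upper > 2.81
```

The resulting interval, roughly 2.6 < \beta < 2.9, indeed brackets the correlation-based bounds quoted above.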



A draft of the three significant mechanisms is shown in fig. 2.

[Sketches omitted.]

\beta       A      B      C      D
2.896   21     3.7    21     1.0
2.941   14     3.2    26     1.0
2.979   8.7    2.9    27     1.0

Fig. 2. Draft of significant mechanisms.

Conclusion A numerical method for a close upper bound approximation to the


reliability with respect to rigid-ideal plastic structural collapse is suggested. The
method is operational for any general piecewise differentiable yield function, and
no restrictions are made with respect to the choice of distributional assumptions
(except that a transformation into the u-space must be possible).
The method is based on non-linear minimization which by nature is time con-
suming. However, having an efficient computer available, the reliability evaluation
of complex ideal plastic structures is not out of reach.

References
[1] Bjerager, P., "Reliability Analysis of Structural Systems", Department of Structural Engineering,
Technical University of Denmark, R183, 1984.
[2] Bjerager, P., "Plastic systems reliability by LP and FORM", Computers and Structures, Vol. 31,
No. 2, 1989.
[3] Dennis, J.E., and Schnabel, R.B., "Numerical Methods for Unconstrained Optimization and Non-
linear Equations", Prentice Hall Series in Computational Mathematics, 1983.
[4] Ditlevsen, O., and Bjerager, P., "Reliability of highly redundant plastic structures", Journal of
Engineering Mechanics, ASCE, 110(5), 1984.
[5] Ditlevsen, O., and Bjerager, P., "Plastic Reliability Analysis by Directional Simulation", Journal
of Engineering Mechanics, ASCE, 115(6), 1989.
[6] Ditlevsen, O., "Probabilistic statics of discretized ideal plastic frames", Journal of Engineering
Mechanics, ASCE, 114(12), 1988.
[7] Ditlevsen, O., and Arnbjerg-Nielsen, T., "Reliability analysis of stochastic rigid-ideal plastic wall
by finite elements", IFIP, London, 1988.
[8] Liu, P.-L., and Kiureghian, A.D., "Optimization algorithms for Structural Reliability", in Pro-
ceedings, Joint ASME/SES Applied Mechanics and Engineering Sciences Conference, pp. 185-
196, Berkeley, CA, June 1988.
[9] Madsen, H.O., Krenk, S., and Lind, N.C., "Methods of Structural Safety", Prentice-Hall, 1986.
[10] Madsen, K., and Tingleff, O., "Robust subroutines for non-linear optimization", Report NI 86-01,
Institute for Numerical Analysis, Technical University of Denmark, 1986.
[11] Schittkowski, K., "NLPQL: A Fortran Subroutine Solving Constrained Non-linear Programming
Problems", Annals of Operations Research, 1986.
[12] Sigurdsson, G., "Some aspects of reliability of offshore structures", Paper no. 55, Institute of
Building Technology and Structural Engineering, University of Aalborg, Denmark, 1989.
[13] Thoft-Christensen, P., and Murotsu, Y., "Application of Structural Systems Reliability Theory",
Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 1986.
OPTIMAL ALLOCATION OF AVAILABLE RESOURCES
TO IMPROVE THE RELIABILITY OF BUILDING SYSTEMS

Giuliano Augusti*, Antonio Borri** & Emanuela Speranzini***


*Dipartimento di Ingegneria Strutturale e Geotecnica
Universita' di Roma "La Sapienza", Italy
**Dipartimento di Ingegneria Civile
Universita' di Firenze, Italy
***Istituto di Energetica, Facolta' di Ingegneria
Universita' di Perugia, Italy

SUMMARY

This paper deals with structural optimization under limited resources. As a first relevant
example, the problem of reduction of seismic risk in a built environment is tackled. Following
other recent studies [1-3], three alternative "types" of retrofitting (upgrading interventions) of
known cost per unit volume are considered; it is also assumed that the "vulnerability" of each
building, the "losses" under an earthquake of given magnitude and their possible reduction
thanks to an upgrading intervention are known. Such "losses" can be either (i) the direct and/or
indirect "economical" costs of construction damage, or (ii) the non-monetary losses related to
the number of "victims", i.e. of persons physically affected by the ruin of a building. Use is made
of "dynamic programming" in order to allocate the total available resources among the buildings,
in such a way that the expected cost reduction is maximized; in these calculations, "economical"
and non-monetary losses are considered separately; the possibility of taking both into account
is also discussed. Numerical examples are presented.

1. INTRODUCTION

It is well known that, when account is taken of random uncertainties and of the consequent
probabilities of failure, the total expected cost of a construction is minimum (and
correspondingly the expected "utility" is maximum) for a certain set of values of the "design
variables". It is a typical feature, confirmed by all numerical investigations so far performed [4],
that from the optimal point the total cost increases (and conversely, the expected utility
decreases) very rapidly for under-designed structures, and rather slowly for over-designed
structures: see Fig. 1 [4-5] for a simple example with one design variable (the cross-section of
the columns of a one-storey steel frame). Taking account also of the "intangibles" (i.e. non-
monetary costs) that cannot be included in the "economical" optimum, it appears therefore
obvious that a correctly "optimized" design will tend to increase the "initial cost", perhaps even
beyond the "apparent optimal" point.
On the other hand, there may be instances in which the available resources are limited,
and therefore the total initial cost of the constructions has rigid bounds that may keep the
designs below the optimal points. This may happen e.g. when for social reasons it is necessary
to build a certain volume of housing but a given amount of money has already been allotted to
the building agency, or when a private builder wants to optimize the returns of a fixed quantity of
money.

[Plots omitted: expected utility versus column section (HEA 180 to 300), for several values of the relevant parameters.]

Fig.1: Simple example of structural design for maximum expected utility: (a) example frame;
(b) plots of expected utility vs column section, for several values of the relevant parameters [4-5].

An even more relevant and realistic example of such a situation is the problem of risk
reduction for a set of existing buildings that have an unacceptable probability of failure because
of their age and deterioration or of their insufficient design: typically, this is the problem posed
by "old" housing in areas that have been recognized as seismically active because, as is well
known to the technicians and planners that tackle these problems, the money necessary for an
exhaustive upgrading is seldom available, but still something must be done in order to improve
the condition of the buildings.

Design optimization under limited resources appears therefore a problem of practical, as
well as academic, interest; however, to these Authors' knowledge, it had not been treated
previously in structural engineering before their recent studies [1-3], in which a procedure
for such optimization problems was developed and applied to the specific case
of the reduction of seismic risk of existing buildings. This procedure will be again illustrated and
further developed in the present paper, but these Authors feel that it could be easily extended to
analogous situations.

2. SEISMIC VULNERABILITY AND RISK REDUCTION

Consider a set of existing buildings and assume that their "vulnerability" for a given type of
load is known, i.e. that the (probabilistic) relationship is known between "expected damage" and
"load intensity": in the applications so far developed, only seismic load is considered and the
vulnerability is measured by a number ("vulnerability index"), in turn obtained in surveys in which
the so-called "second level" form has been used, as described in several references (e.g. [6]). A
relationship between vulnerability index, earthquake intensity and percent "damage" of the
building must be introduced for subsequent developments: in the present research, the set of
curves in Fig. 2 has been assumed as deterministic relationships [7-8]. (It is fair to say at once
that this is one of the weakest points in the whole procedure, because of the great dispersion
of the data on which the curves of Fig. 2 were based, and the consequent statistical
uncertainties which should somehow be taken into account in order to obtain more significant
numerical results. However, it is also true that this approximate assumption, like the other
simplified relationships introduced as first tentatives where necessary, does not invalidate the
essence of the procedure illustrated here, because different, more sophisticated relationships
could easily be introduced whenever available.)

[Plot omitted: expected percent damage versus vulnerability index (75 to 275), one curve per MSK intensity class, including 2) I = 8.5-9, 3) I = 8-8.5, 4) I = 7.5-8, 5) I = 7-7.5.]

Fig. 2: Expected percent damage versus vulnerability index, for several MSK intensities (from D. Benedetti, G.M. Benzoni, Int. Conf. on "Reconstruction, Restoration and Urban Planning of Towns and Regions ...", Skopje, 1985).
Once the (expected) damage of a building has been calculated, the "losses" must be
evaluated: such "losses" should include both the direct and/or indirect "economical" costs of
construction damage, and the relevant "intangible" (non-monetary) losses, in particular those
related to the number of "victims", i.e. of persons physically endangered by the ruin of a
building. Denoting by d the percent damage, the approximate relations shown in Fig. 3 have
been used so far [7]:

[Plots omitted: (a) monetary losses c/C_i versus damage d; (b) victims v/n_p versus damage d.]

Fig. 3: a) Monetary losses c vs. damage d (C_i: initial cost); b) victims v (number of endangered
persons) vs. damage d (n_p: number of persons present in the building at the time of the earthquake).

Note that, in the numerical examples, the number n_p of present persons has been
assumed equal to 0.017 per cubic meter: the possibility of distinguishing between buildings of
different uses has been left to further research. Making use of the relationships in Figs. 2 and
3, the damages and the losses produced in the relevant buildings by the occurrence of an
earthquake of given local intensity can be calculated. This "expected earthquake" should be
known in probabilistic terms: in the examples that follow, either the intensity of the quake or its
probability mass function (p.m.f.) [4] is assumed known. Note that, as in all our previous
research [1-3, 7-9], "monetary" and "intangible" costs, being incommensurable, are kept
separate from each other; note also that other "intangible losses", like those related to the
historical and artistic values of the buildings, are not (yet) included in the treatment.
In order to formulate a rational "intervention strategy", it is necessary to know also how
much each possible upgrading intervention costs, and by how much it reduces the vulnerability,
and consequently damages and losses. Therefore, the possible interventions and their effects
must be modelled according to some appropriate simple "scale". In this paper, in analogy with
previous ones, the following three types, respectively denoted as "light" (L), "medium" (M) and
"heavy" (H), are considered:

i) the type L intervention consists in connecting horizontally (by "chains" or other means)
the vertical structures;

ii) in the type M intervention, besides connecting the vertical structures as in L, the
horizontal diaphragms are strengthened;

iii) the type H intervention includes, besides the reinforcements already described under L
and M, an increment of the overall strength against horizontal actions, e.g. by the construction
of new vertical structures or the strengthening of the existing ones.

Recent Italian experience allows fairly constant values to be attributed to the cost per unit
volume of the above types of intervention performed on "old" masonry buildings (like the ones
examined in our examples), namely the following values, which are used in this paper also: type L:
20000 Liras (approx. 13.5 ECUs) per cubic meter; type M: 40000 Liras (approx. 27 ECUs) per
cubic meter; type H: 80000 Liras (approx. 54 ECUs) per cubic meter.

By assumption, each type of intervention brings into the best (A) class the corresponding
item(s) of the vulnerability survey form and consequently reduces the vulnerability index of the
building.

In previous studies by these and other Authors, the objective of the interventions had been
set first, then the costs necessary to reach that objective evaluated: the result was then
regarded as an "indispensable" amount of money. For instance, in [7] it was stated that no
building in the relevant area should be damaged more than 40%, and the necessary
interventions were identified for several intensities of "design earthquake"; then, introducing
the quoted unit costs of each type of intervention, the expenditures were calculated.

3. OPTIMAL RISK REDUCTION UNDER LIMITED RESOURCES

No "optimization" is involved in the preceding approach. However, as anticipated in the
introduction, the actual problem is often posed in quite a different way. Namely, only a limited
amount of resources is available, and the strategy to get the most out of the given resources is
sought: thus, the problem of "optimal resource allocation" arises.

In order to define an appropriate "optimization criterion", the objective function ("total
return") should be chosen. However, as already stated, we prefer always to keep "monetary"
and "intangible" costs separate from each other: two different objective functions and
optimization strategies are thus set up for each problem.

We define as return g_ki(x_i) of an intervention of cost x_i performed on the i-th building of the
considered set the decrease in the losses related to that building, should a quake of a given
intensity occur. Such losses can be either the monetary or the intangible ones, which will be
distinguished by changing the suffix k respectively into c or v. Since only three "discrete" types
of intervention have been considered, each function g_ki(x_i) is a three-step function, with steps at
the abscissae corresponding to the cost of the intervention of each considered type [1].

Thus, for either "monetary" or "intangible" losses, the two following basic problems can be
set up:

i) optimal allocation of the available resources among n buildings (or n groups of similar
buildings) located in a site in which a uniform "design earthquake intensity" is prescribed. This
problem can be formally presented as follows:

maximize the total return  f_k(x_1, x_2, \ldots, x_n) = \sum_i g_{ki}(x_i)    (1)

subject to  \sum_i x_i \le X_{max}    (2)

and  x_i \ge 0,  (i = 1, 2, \ldots, n)    (3)

where X_{max} denotes the maximum total expenditure allowed, and the suffix k can be either c
or v.

This case, in which the design earthquake is conventionally assumed to occur with
probability one, allows the resources to be "distributed" optimally but does not allow the
"expected utility" of the upgrading strategy to be evaluated.

ii) optimal allocation of the available resources among n buildings (or n groups of similar
buildings) located in sites in which the probability mass functions of the significant earthquake
intensities are known: such p.m.f.'s could be the same for all buildings, but in the most general
case a probability of occurrence p_{ij} will be defined for each intensity j and building i. Thus, the
following optimization problem can be formulated:

maximize the expected total return  f_{ek}(x_1, x_2, \ldots, x_n) = \sum_i \sum_j p_{ij} g_{ki}(x_i)    (4)

subject to  \sum_i x_i \le X_{max}    (5)

and  x_i \ge 0,  (i = 1, 2, \ldots, n)    (6)

where again X_{max} denotes the maximum total expenditure allowed, and the suffix k can be
either c or v.
The non-linearity and discontinuity of the relevant relationships do not allow the use of
"standard" (differential) maximization procedures. It is therefore convenient to discretize the
whole problem, including the resources that can be allocated. However, even with a
comparatively small number of buildings and of available resource units, the number of possible
alternative allocations is very large: for example, in the following numerical example (120
resource units and 30 buildings) it is of the order of 10^30.
It is therefore necessary to use an appropriate algorithm. Noting that the objective function
fk or fek is the sum of as many terms as there are buildings, each in turn a function of only the
resources assigned to the i-th building, the optimization becomes a "multi-stage decision
process": such a problem can be solved by a specific optimization technique involving a
comparatively small number of operations, namely "dynamic programming", already well known and
widely used in other branches of research [10].
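The multi-stage structure can be sketched in a few lines. The following Python fragment is a minimal illustration, not the authors' code; the return table g and the unit counts are invented. At each stage (building) it keeps, for every possible number of units spent, only the best value found so far, which is exactly the dynamic-programming recursion:

```python
def allocate(returns, total_units):
    """returns[i][x] = return from assigning x units to building i.
    Maximizes sum_i returns[i][x_i] subject to sum_i x_i <= total_units."""
    # best[u] = (best value, allocation) over the buildings processed so far,
    # using exactly u resource units
    best = {0: (0.0, [])}
    for building in returns:
        new_best = {}
        for u, (val, alloc) in best.items():
            for x, ret in enumerate(building):
                if u + x > total_units:
                    break
                cand = (val + ret, alloc + [x])
                if u + x not in new_best or cand > new_best[u + x]:
                    new_best[u + x] = cand
        best = new_best
    return max(best.values())

# 3 buildings, 4 resource units, diminishing returns (illustrative numbers):
g = [[0, 5, 8, 9, 9.5],
     [0, 4, 7, 9, 10],
     [0, 6, 7, 7.5, 8]]
value, alloc = allocate(g, 4)
```

Because only one candidate per spent-units level survives each stage, the work grows linearly with the number of buildings rather than exponentially with the number of allocations.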

4. NUMERICAL EXAMPLE

The allocation of 120 units of resources (of 1 × 10^7 Liras, i.e. approximately 7,000 ECUs,
each) among 30 buildings is presented now. The relevant data on the buildings and the returns
of the three types of interventions for an earthquake of MSK intensity 8.5-9 are shown in Table I.
Similar tables can be constructed for other intensities.
Through the dynamic programming algorithm, which evaluates only a limited number of
alternative solutions, the optimum points can be rapidly reached for both the "economical" and
the "intangible" return (fc and fv) and for each relevant intensity. The results obtained for fc are
shown diagrammatically in Fig. 4: for each building, the number of resource units for each type
of intervention is shown on the vertical axis, while the "optimal" solutions are indicated by the
connecting lines.

Table I - Vuln.: vulnerability index; vol.: volume (m³); co, vo: respectively the expected cost of
repair in u.o.r. (units of resources) and the expected number of endangered persons if no
preventive intervention has been performed; c: cost of the intervention in u.o.r.; gc, gv:
respectively the decrease of the expected cost of repair in u.o.r. and the decrease of the
expected number of endangered persons if the intervention has been performed.

                                 int. L           int. M           int. H
 No.  Vuln.  vol.   co     vo    c   gc    gv     c   gc    gv     c   gc    gv
  1    225   1755   66.6   4     4   30    3      7   32.3  3.1   16   64.4  4
  2    234   1071   42.8   2.5   2   17.4  1.8    4   21.3  2     10   34.0  2.5
  3    241    942   37.7   2.9   2   12.8  2      4   16.4  2.3    8   29.8  2.9
  4    292   1017   40.7   9.7   2    0    3.5    4    0    4      9   24.4  9.2
  5    314   1300   52.0   5.3   3    0    0      5    0    0.3   12   31.2  5
  6    224    924   34.5   4.5   2    7.6  1.9    4   11.3  2.6    8   31.6  4.5
  7    250    884   35.4   2.4   2    9.1  1.6    4   10.5  1.7    8   35.2  2.4
  8    246    980   39.2   1.6   2    1.5  0.6    4   11.5  1.1    9   39.0  1.6
  9    189   1164   27.7   1.9   2    7.5  1.1    5   11.3  1.5   10   27.7  1.9
 10    196    496   13.1   0.6   1    3.4  0.3    2    5    0.4    4   13.1  0.6
 11    171   1170   21.1   0.5   2    1.8  0.1    5    9.8  0.5   11   18.9  0.5
 12    199    807   22.1   1.6   2   11.6  1.5    3   12.5  1.5    7   16.4  1.6
 13    191    884   21.8   1     2    5.8  0.6    4    6.9  0.6    8   19.5  1
 14    206   1722   52.0   4.5   3   12.5  2.1    7   18.7  2.9   15   50.5  4.5
 15    201   1610   45.5   3.5   3   23.5  3.1    6   28.3  3.4   14   45.5  3.5
 16    312    366   14.6   3.5   1    0    0      1    0    0.3    3    6.5  3.1
 17    231    360   14.4   1.6   1    6.2  1.1    1    7.4  1.3    3   12.9  1.6
 18    200   1225   34.0   1.4   2    2.2  0.2    5   19.4  1.3   11   33.8  1.4
 19    220    527   18.8   2.4   1    4.2  1      2    5    1.2    5   16.8  2.4
 20    190   1932   46.8   1.7   4   26.2  1.7    8   31.6  1.7   17   40.2  1.7
 21    282   1000   40.0   7     2    0    0.8    4    0    1.3    9   19.3  6.4
 22    278    912   36.5   5.3   2    0    0      4    0    2.8    8   23.2  5.1
 23    202    636   18.3   1.8   1    1.2  0.3    3   10.2  1.7    6   18.2  1.8
 24    266    968   38.7   5.9   2    0    2.1    4    0    2.4    9   23.6  5.7
 25    213   1497   48.9   2.9   3   23.5  2.3    6   25.3  2.4   13   43.9  2.9
 26    290   1264   50.6  11.4   3    0    0      5    0    5.2   11   20.0  9.8
 27    167    913   15.5   0.6   2    5.1  0.5    4    6.1  0.6    8   15.5  0.6
 28    189    581   14.0   0.6   1    3.8  0.3    2    5.6  0.5    5   14.0  0.6
 29    196    430   11.4   0.6   1    2.9  0.3    2    3.5  0.4    4   11.4  0.6
 30    169    525    9.1   0.3   1    0.8  0.1    2    6.8  0.3    5    9.0  0.3

[Figure omitted: bar chart of resource units (Ris.) allocated to each building (Bd. 1-30), with
separate lines for intensities I = 7-7.5, 7.5-8, 8-8.5 and 8.5-9]
Fig.4: Allocations of 120 resource units that maximize the decrease in the total "economical"
losses over 30 buildings.

As a second example, the expected returns for the same set of buildings have been
optimized. To this aim, neglecting structural damage caused by MSK intensities smaller than 7,
the larger intensities have been grouped into the following four classes (cf. eq. (4)):

j:               1        2        3        4
MSK intensity:   7-7.5    7.5-8    8-8.5    ≥8.5

The 30 buildings have been assumed to be located in three areas of the Central Italian
Region of Umbria, known to have different seismicities that can be defined by the following
yearly probabilities of occurrence:

Buildings     Site (Town)           p1i      p2i      p3i      p4i
i = 1-10      Bastia Umbra          0.0018   0.0011   0.0008   0.0011
i = 11-20     Città di Castello     0.0019   0.0012   0.0008   0.0012
i = 21-30     Cascia                0.0039   0.0023   0.0013   0.0017
The results pertaining to both optimizations, with respect to the "economical" and the
"intangible" returns, are shown in Fig. 5, analogous to Fig. 4 above.

[Figure omitted: resource units (Ris.) allocated to each building (Bd. 1-30) under the fc and fv
optimizations]

Fig.5: Allocations of 120 resource units that maximize the decrease in either the expected total
"economical" loss or the expected total "intangible" loss over 30 buildings.

5. COMBINING "ECONOMICAL" AND "INTANGIBLE" OPTIMIZATION

In some instances, sub-optimal solutions must also be considered. For instance, one
might decide to assign the maximum useful amount of resources to some buildings of particular
importance (e.g. fire stations), and optimize the allocation of the remaining available resources
among the remaining buildings.

An analogous problem arises when trying to pursue both the "economical" and the
"intangible" criteria at the same time; by their very nature these cannot be combined
mathematically and, as shown in the examples presented, can be applied independently of
each other to obtain two different "optimal solutions".
Instead, one could prefer a solution that is only "nearly optimal" from the economical
viewpoint but at the same time significantly reduces the total expected number of endangered
persons. Following an idea presented in an earlier paper [9], a first attempt was made to start
from the point that corresponds to the optimal solution from the purely "economical" viewpoint,
and to determine "paths" along which the decrease of the economical return is smallest in
comparison with the increase in "intangible" return (the reduction in v). The same procedure
can be followed taking the intangible optimal solution as the starting point, and determining the
path which leads to the smallest decrease in "intangible" return, compared with the increase in
"economical" return. The choice between the two procedures depends on the relative
importance of the "economical" and "intangible" returns, which remains to be decided upon
by the user. These paths have been determined for some example cases by means of a
heuristic algorithm, although it has not always been possible to arrive at a complete definition of
the minimal-descent path with respect to both optimal values.
An alternative approach is that of the so-called "Pareto" [11] optimal solutions, whose
definition can be summarised in our case as follows: a solution (to which corresponds
a certain couple of values fcP, fvP) is optimal according to Pareto if no other solution exists that
presents simultaneously greater values of both fc and fv. The optimal points (of which there are
in general more than one) belong to the boundary line between the maximum points of fc and fv
on an fc-fv plane (see Fig. 6). Further research is under way to identify a more efficient
algorithm for determining these optimal points.
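For a small discrete solution set, the Pareto points can be extracted directly by a dominance check. A minimal Python sketch (the (fc, fv) pairs below are invented for illustration, not computed from the paper's data):

```python
def pareto_front(points):
    """Keep the points not dominated by any other: q dominates p when q is
    at least as good in both coordinates and different from p."""
    return [p for p in points
            if not any(q[0] >= p[0] and q[1] >= p[1] and q != p
                       for q in points)]

# Illustrative (fc, fv) pairs for candidate allocations:
solutions = [(10, 3), (9, 5), (8, 8), (7, 7), (10, 2), (6, 9)]
front = pareto_front(solutions)
# (7, 7) is dominated by (8, 8), and (10, 2) by (10, 3)
```

The quadratic pairwise check is adequate for a handful of candidates; an efficient algorithm for the full discrete allocation problem is precisely what the text above leaves as an open question.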

[Figure omitted: scatter of all computed (fc, fv) solution pairs, with max fc and max fv marked]

Fig. 6 - "Pareto" optimal solutions: points belonging to the boundary line between max fc and
max fv on the fc-fv plane. In this case all possible solutions have been calculated and plotted.

ACKNOWLEDGEMENTS

This research has been supported by grants from the (Italian) Ministry of University
and Research to the University of Rome "La Sapienza" and by a C.N.R.-GNDT Research
Contract with the University of Florence.

REFERENCES

[1] G. Augusti, A. Borri, E. Speranzini: Seismic vulnerability data and optimum allocation of
resources for risk reduction; Proc. 5th Internat. Conf. on Structural Safety and Reliability
(ICOSSAR'89), San Francisco, August 1989, pp. 1,645-652.

[2] G. Augusti, A. Borri, E. Speranzini: Allocazione ottimale delle risorse per gli interventi di
riduzione del rischio sismico; 4th Italian Nat. Conf. on Earthquake Engineering, Milano, 1989.

[3] G. Augusti, A. Borri, E. Speranzini: Optimum allocation of resources for seismic risk
reduction: economical return and "intangible" quantities; 9th European Conf. on Earthquake
Engineering, Moscow, September 1990.

[4] G. Augusti, A. Baratta, F. Casciati: "Probabilistic Methods in Structural Engineering";
Chapman & Hall, London - New York, 1984.

[5] G. Augusti, F. Casciati: Multiple modes of structural failure: probabilities and expected
utilities; Proc. 2nd Internat. Conf. on Structural Safety and Reliability (ICOSSAR'77), Munich;
Werner-Verlag, Düsseldorf, 1977, pp. 39-57.

[6] G. Augusti, D. Benedetti, A. Corsanego: Investigations on seismic risk and seismic
vulnerability in Italy; Proc. 4th Internat. Conf. on Structural Safety and Reliability (ICOSSAR'85),
Kobe, Japan, 1985; Vol. 2, pp. 267-276.

[7] G. Augusti, A. Borri: Generazione automatica di mappe del rischio sismico e dei possibili
interventi; Univ. Firenze, Civil Engineering Dept., Structures Division, Publ. No. 3/86.

[8] G. Augusti, A. Borri, T. Crespellani: Dynamic maps for seismic risk reduction and effects
of soil conditions; Univ. Firenze, Civil Engineering Dept., Geotechnical Division, Publ. No. 5/88.

[9] G. Augusti, A. Borri, F. Casciati: Structural design under random uncertainties:
economical return and "intangible" quantities; Proc. 3rd Internat. Conf. on Structural Safety and
Reliability (ICOSSAR'81), Trondheim, 1981; pp. 483-494.

[10] R.E. Bellman, S.E. Dreyfus: "Applied Dynamic Programming"; Princeton University
Press, Princeton, N.J., 1962.

[11] V. Pareto: Cours d'économie politique; Paris, 1896-97.


LIFE EXPECTANCY ASSESSMENT OF STRUCTURAL SYSTEMS

Bilal M. Ayyub, Gregory J. White & Thomas F. Bell-Wright


Department of Civil Engineering, University of Maryland
College Park, MD 20742, U.S.A.

ABSTRACT
The assessment of structural life expectancy for marine vessels is a relatively
complex task. This is due to uncertainties in the various parameters as well as
incomplete knowledge of their characteristics. In this paper, a methodology for
structural life expectancy assessment is suggested. The methodology is based on structural
reliability, theory of extremes, a plate wastage model, and system analysis. Example
applications using marine structures are discussed.

A WORKING DEFINITION OF THE END OF STRUCTURAL LIFE


A reliability assessment method was developed as part of an investigation of a
patrol boat type of vessel operated by the U. S. Coast Guard. A wide range of
possible types of structural failure for this type of vessel, the Island Class, had
been identified [Ayyub, et al 1989]. The types of failure were categorized on a
scale from catastrophic through diminishing limits of operational serviceability to
simple nuisance modes.

The reliability assessment method of quantifying structural life expectancy
requires a "crisp" definition of the end of a vessel's structural life. Based
on structural analysis and testing of the hull [Purcell, et al 1988], a "critical
area" of the Island Class underbody containing 28 plate panels, and bounded by at
least one bulkhead and by transverse and longitudinal frames, was identified. It has
been shown that for the patrol boat type of vessel, the greatest stresses occur
within this region of the bottom, forward of amidships, where the hull structure is
exposed to slamming loads. For the Island Class, the definition of the end of
structural life was based on current practice of inspection and repair, and was
selected as 6 or more plate panels of the 28 within the critical region having a
deformation greater than 3 times the plate thickness. The fatigue criterion for the
end of structural life was the development of at least one fatigue failure at a
structural detail within the critical region. The end of structural life was thus
defined as meeting one of these two criteria.

THE RELIABILITY ASSESSMENT METHOD


The performance of a structural component can be expressed as

Z = g(X1, X2, ..., Xn)                                         (1)

in which the Xi's, i = 1, ..., n, are the basic random variables of loading and
strength, with g(.) being the functional relationship for a particular potential
failure mode, and Z represents the "performance." The case Z = 0 gives the
failure surface, and the function is then defined as the limit state equation. The
case Z < 0 represents failure, and Z > 0 represents survival. The probability
of failure can then be expressed by

Pf = ∫ ... ∫ fX(x1, ..., xn) dx1 ... dxn                       (2)
      Z < 0

where fX is the joint density function of X = (X1, X2, ..., Xn), and the integration is
performed over the region where Z < 0. Because each of the basic random variables
has a unique distribution and they interact, this expression cannot be solved in
closed form. The method of Monte Carlo simulation with conditional expectation and
antithetic variates variance reduction techniques was used to determine the
probability of failure according to equation (2) [Ayyub and Haldar 1984; White
and Ayyub 1985].
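The combination of conditional expectation (averaging the closed-form CDF of a conditioning variable instead of counting indicator outcomes) and antithetic variates (pairing each uniform u with 1 - u) can be sketched as follows. This is a minimal illustration, not the authors' code: the resistance is taken as normal and the conditioning load variable as a Gumbel distribution, with all parameter values invented.

```python
import math
import random
import statistics

def pf_estimate(mu_r=10.0, sd_r=1.5, a=5.0, b=1.0, cycles=2000, seed=1):
    """Estimate P(load > resistance): resistance ~ Normal(mu_r, sd_r); the
    conditioning variable (the load) ~ Gumbel(a, b) with a closed-form CDF."""
    rng = random.Random(seed)
    nd = statistics.NormalDist(mu_r, sd_r)
    total = 0.0
    for _ in range(cycles):
        u = rng.random()
        for v in (u, 1.0 - u):                 # antithetic pair u, 1 - u
            r = nd.inv_cdf(v)                  # sampled resistance
            load_cdf = math.exp(-math.exp(-(r - a) / b))  # P(load <= r)
            total += 1.0 - load_cdf            # conditional failure probability
    return total / (2 * cycles)

pf = pf_estimate()
```

Each simulation cycle contributes an exact conditional probability rather than a 0/1 outcome, and the antithetic pairing induces negative correlation between the two evaluations, both of which reduce the variance of the estimate for a fixed number of cycles.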

THE RANDOM VARIABLES OF PLATE DEFORMATION


Except for thickness which is time-variant, the values of the physical and
elastic properties of the bottom plate components were each taken as the mean value
of a normal distribution, and a COV assigned based on experience or judgement.
Together with thickness, these were the random variables of the structural strength,
or resistance to plastic deformation. The loading was in the form of pressure due to
the buoyant force of the water, augmented by a dynamic pressure loading due to the
relative motions of the ship and the waves in which it is operating. The pressure
thus has two components: the dynamic pressure, and the pressure experienced while at
rest in calm water. This stillwater hydrostatic pressure was calculated as a
function of immersion (i.e., p = hρg, where h = water depth, ρ = water density, and g
= gravitational acceleration), a normal distribution assumed, and a COV assigned on
the basis of judgement.

THE DYNAMIC PRESSURE


The U.S. Coast Guard had performed a series of tests on a patrol boat of the
Island class, operating the boat in different combinations of boat speed and sea
state (wave height) [Purcell, et al 1988]. Strain gauges had been placed against the
bottom plating and strains recorded while driving the boat into the waves (head seas).

The data from these strain measurements were provided in the form of a plot against
time, each plot of data being 10 seconds long, with approximately 30 plots in each of
the 8 wave-height/vessel-speed combinations (cells). The maximum strains in each 10
second interval were compiled into a histogram to determine the type of probability
distribution, and its statistical characteristics. The mean maximum strain was
converted to a mean "maximum dynamic pressure." The "design dynamic pressure" was
then calculated for the Island class, and the ratio of these two pressures
determined. The maximum dynamic pressure for other vessel types is determined by
calculating the design dynamic pressure and adjusting it by the same ratio.

PLATE THICKNESS
According to current practices of the U.S. Coast Guard, once every two years the
outside surface of each boat's hull is sandblasted in order to remove bio-fouling,
and to prepare the surface to receive the new coating system (anti-fouling paint).
This process, together with some expected pitting due to galvanic corrosion, results
in a reduction in the plate thickness of the order of one to three mils
(thousandths of an inch) per year.

As described in the next section, the performance function for plate deformation
was evaluated at the extreme value type CDF of the dynamic wave pressure loading for
each of the fifteen time periods of 2 years, 4 years, 6 years, etc., up to 30 years.
The result of those calculations is the probability in each period, of the strength
being exceeded by an extreme loading at some time during that period. The greater
the length of the period, the greater the probability of a higher loading. In other
words, an extreme load is evaluated in a time period T, where T can be any value from
zero to the design life of the boat. This extreme load can occur at any time, t,
within the time period T. Therefore the wastage must be characterized over the whole
period T, not at some discrete place such as the end or the middle.

According to the Seawater Corrosion Handbook (1979), the coefficient of
variation of plate wastage is qualitatively estimated to be in the range of 0.1 to
0.25 and the distribution is expected to be Lognormal. Furthermore, the rate of
wastage is assumed to be stationary in time, i.e., its mean value, COV and
distribution type are independent of time. In the following discussion, the
distinction between "wastage" and "wastage rate" should be noted.

The Wastage Allowance Model


For a wastage rate that has a mean value of Rm, a standard deviation of σR and
a coefficient of variation of COV(R), the wastage at the end of any year t is given
by

w̄t = t Rm                                                     (3)
and
                                                               (4)

in which w̄t is the mean value of wastage at time t, and COV(w̄t) is the coefficient of
variation of the wastage. The mean value of wastage increases linearly with time,
and the uncertainty, i.e., the COV, also increases with time.

Plate wastage is a non-stationary stochastic process. It can be simulated
using Monte Carlo methods and converted into a random variable. The mean value of
wastage for the end of each year, and COV were known from equation (4) above. The
distribution was assumed to be lognormal. For each period T, values of wastage for
the end of each year in the period were generated. A year would be chosen randomly,
and then a value for wastage generated. This was repeated over many cycles.
Finally, the data thus generated were analyzed to determine the mean and COV for the
period, and the distribution type. The net effect is the projection of all values
throughout the period onto the wastage axis. The simulation was done for wastage
rates Rm of 1, 2 and 3 mpy, COV(R) of 0.1, 0.25 and 0.4, and time periods T of 1, 5,
10, 15, 20, 25 and 30 years. Regression analysis was applied to provide functions
that give values for the mean and COV of wastage at any time period and within a
range of wastage rates. The mean value of wastage was given by

w̄ = 0.926733 Rm T^(0.8082)                                    (5)

where Rm = mean wastage rate in mils/year, and T = the period in years. The COV of
wastage was given by

COV(w̄) = 0.41305 [COV(R)]^(0.2864) T^(0.29933)                (6)

where COV(R) is the COV of the wastage rate.
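The two regression fits are straightforward to evaluate; a small Python transcription of equations (5) and (6), with illustrative inputs:

```python
def wastage_mean(Rm, T):
    """Mean wastage (mils) after period T (years), mean rate Rm (mils/year) -- eq. (5)."""
    return 0.926733 * Rm * T ** 0.8082

def wastage_cov(cov_R, T):
    """COV of wastage after period T, for a wastage-rate COV of cov_R -- eq. (6)."""
    return 0.41305 * cov_R ** 0.2864 * T ** 0.29933

# e.g. a 2 mils/year mean rate with COV(R) = 0.25, over a 10-year period:
mean_w = wastage_mean(2.0, 10.0)   # about 12 mils
cov_w = wastage_cov(0.25, 10.0)    # about 0.55
```

Note that both the mean and the COV grow with T, reflecting the projection of all yearly values within the period onto the wastage axis described above.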

Based on a distribution goodness-of-fit test, it was concluded that the wastage
allowance can be considered to follow a normal probability distribution with negative
values discarded. The lognormal distribution does not have negative values, but its
shape does not represent wastage very well.

THE PERFORMANCE FUNCTION FOR PLATE DEFORMATION


The performance function contains the total strength of the plating to resist
the combined hydrostatic and hydrodynamic loading, minus the hydrostatic loading
itself. The units are those of pressure. Because the plating is relatively thin,
and because the duration of the pressure pulse is long relative to the natural period
of vibration of the plate, a formulation by Hughes (1981) based on principles of
elastoplastic analysis, can be used [Ayyub, et al 1989].

EXTREME PRESSURE -- THE CDF OF THE CONDITIONING VARIABLE


The conditioning variable for plate deformation was taken as the extreme wave
pressure loading, and its CDF, the type I extreme value distribution. Extreme values
are predicted (or extrapolated) based on the known data and its characteristics.

For large values of n, the extreme value distribution approaches an asymptotic
form that depends only on the form of the tail of the parent distribution in the
direction of the extreme value. For probability distributions with exponential
tails, the extreme value probability distribution approaches a distribution of double
exponential form as n → ∞. For example, a normal or lognormal probability
distribution approaches the Type I extreme value distribution. In this case, the
difference between the exact probability distribution for Mn and the Type I extreme
value distribution is relatively small and is practically negligible for n larger
than approximately 25.

For a normal initial probability distribution of the random variable X with a
mean value μ and standard deviation σ, the CDF of the largest value Mn of n sets of
data for random variable X is given by

FMn(m) = Exp(-Exp[-(αn/σ)(m - μ - σ un)])                      (7)

and the PDF of Mn is given by

fMn(m) = (αn/σ) Exp[-(αn/σ)(m - μ - σ un)]
         Exp(-Exp[-(αn/σ)(m - μ - σ un)])                      (8)

where the parameters αn and un are given by

αn = [2 ln(n)]^(1/2)                                           (9)

un = αn - ( ln[ln(n)] + ln(4π) )/(2αn)                         (10)

The mean value and standard deviation of Mn are given by

E(Mn) = μ + σ(un + γ/αn)                                       (11)

and

SD(Mn) = πσ/(√6 αn)                                            (12)

The constants π and γ have the values of 3.14 and 0.577, respectively.
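Equations (9)-(12) are easy to evaluate numerically; the following Python helper is an illustration of the formulas above (the arguments in the example are arbitrary):

```python
import math

def extreme_stats(mu, sigma, n):
    """Mean and standard deviation of the largest of n values from a
    Normal(mu, sigma) parent, via the Type I asymptotic form, eqs. (9)-(12)."""
    alpha_n = math.sqrt(2.0 * math.log(n))                         # eq. (9)
    u_n = alpha_n - (math.log(math.log(n))
                     + math.log(4.0 * math.pi)) / (2.0 * alpha_n)  # eq. (10)
    gamma = 0.5772                                                 # Euler's constant
    mean = mu + sigma * (u_n + gamma / alpha_n)                    # eq. (11)
    sd = math.pi * sigma / (math.sqrt(6.0) * alpha_n)              # eq. (12)
    return mean, sd

m100, s100 = extreme_stats(0.0, 1.0, 100)   # largest of 100 standard normals
```

For n = 100 standard normal samples this gives a mean largest value of about 2.6, close to the exact value of roughly 2.5, consistent with the asymptotic form being practically adequate for n above about 25.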

THE CDF OF LIFE


In this study, 30 sets of data, each 10 seconds long, had been provided for each of
the 8 vessel-speed/sea-state cells. Each data point was the maximum in one 10-second
interval. Variable n in equations (7) to (12) represents how many 10-second
intervals are expected to occur in that cell in the particular time period. This
was determined by combining two more random variables. Data for the number of
operating hours per year, and the percent use in each cell, were provided by the
captains of the patrol boats, and their statistical characteristics determined. This
is referred to as the operating profile.
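Concretely, n is the product of the annual operating hours, the fraction of time spent in the cell, the length of the period, and the number of 10-second intervals per hour. All numbers in the following sketch are assumed for illustration, not the surveyed operating profile:

```python
# All values below are assumed; the real ones come from the operating profile.
hours_per_year = 1800            # annual operating hours (assumed)
cell_fraction = 0.12             # fraction of time in this cell (assumed)
T_years = 4                      # length of the time period
intervals_per_hour = 3600 // 10  # 10-second intervals in one hour
n = int(hours_per_year * cell_fraction * T_years * intervals_per_hour)
```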

Equation (7) is the CDF of the conditioning variable in the conditional
expectation variance reduction technique, namely of the extreme dynamic pressure. The
parameters αn and un are calculated from n by equations (9) and (10), and the mean
value μ and standard deviation σ are calculated from the mean value and standard
deviation of the maximum dynamic pressure by equations (11) and (12). Equation (7)
is then evaluated at m, the function of random variables, which in this case is the
elasto-plastic resistance minus the hydrostatic pressure.

The calculation was performed on data that was simulated from the
characteristics of the random variables using the antithetic variates variance
reduction technique. Two thousand cycles of simulation were used for each of the
fifteen time periods considered, and the calculation performed once for each of the
two antithetic variates, in each cycle. The average of the results over all the
cycles in each time period gives the probability of failure in plate deformation, of
some one plate, according to the deformation ratio criterion, in each time period.

The criteria for failure in plate deformation (i.e. end of structural life) had
been selected as 6 out of 28 plates meeting the deformation ratio criterion, as
described previously. The binomial distribution, given by

            N
P_fn/N  =   Σ   [ N! / ( k! (N-k)! ) ] Pfp^k (1 - Pfp)^(N-k)           (13)
           k=n

provides the probability of at least n events out of N trials, P_fn/N, given the
probability of one event, Pfp, and based on the assumption that the events of the N
trials are independent. In this case, the "event" was the failure of some one plate,
and the N "trials" were the 28 possible failures. The failure events are actually
not completely independent; however, the degree of dependence is small, and could be
assumed negligible. The resulting probability is the probability of meeting or
exceeding the failure criterion defining the end of structural life, in the time
period under consideration. The calculation was made for each of the 15 time periods
considered, and the resulting curve gave the cumulative distribution function of
structural life according to the plate deformation failure mode criteria.
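Equation (13) is a standard binomial tail sum, which can be written in a few lines of Python (the per-plate failure probability below is illustrative, not a computed value):

```python
from math import comb

def p_at_least(n, N, p):
    """Probability of at least n failures among N plates, each failing
    independently with probability p -- equation (13)."""
    return sum(comb(N, k) * p**k * (1 - p)**(N - k) for k in range(n, N + 1))

# e.g. at least 6 of the 28 panels, with an assumed per-plate probability of 0.2:
P = p_at_least(6, 28, 0.2)
```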

THE PERFORMANCE FUNCTION FOR THE FATIGUE FAILURE MODE


As stated before, the criterion for the end of structural life due to fatigue
failure, was the development of at least one fatigue crack in the critical region.
From the construction drawings, the types of structural details were identified
according to the classification by Munse et al. (1982), and for which they had given
the characteristics of the S-N curve. The S-N method (stress range versus number of
cycles to failure) of fatigue investigation is considered to be more appropriate than
the methods of fracture mechanics because of the difficulty of inspecting details
that are largely inaccessible and/or painted.

The first step in the analysis was to characterize the loading. The original
test data was compiled into a stress range histogram, with data from each operating
cell weighted according to the proportion of time historically spent in that wave-
height/vessel-speed combination. The histogram is expressed as a matrix of 42 "bins"
that are 100 psi wide, with values ranging from 600 to 160700, with the percentage of
the total number of data in each bin.

In the S-N data, the cycles to failure are given for each detail for a range of
constant cyclic loads, therefore the histogram had to be converted into an equivalent
constant load, called the "equivalent constant amplitude stress range." This was
accomplished using the Palmgren-Miner rule which states that the cumulative energy of
constant amplitude loading is equal to the cumulative energy of variable amplitude
loading. This is given by

Sre = [ Σ (k=1 to n) a(k)^m Pc(k) ]^(1/m)                      (14)

where Sre is the equivalent constant amplitude stress for the detail, a(k) is the
value of the kth stress data bin, Pc(k) is the percent of all the data points found
in the kth bin, and m is the slope of the S-N line plotted on a log-log scale.
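Equation (14) amounts to an m-th-power weighted mean of the bin stresses; a short Python sketch (the three-bin histogram is invented for illustration, not the measured data):

```python
def equivalent_stress(bins, fractions, m):
    """Equivalent constant-amplitude stress range, eq. (14): bins[k] is the
    stress of the k-th bin, fractions[k] the fraction of the data in it."""
    return sum(s**m * f for s, f in zip(bins, fractions)) ** (1.0 / m)

# e.g. a crude 3-bin stress histogram (psi) with an S-N slope m = 3:
Sre = equivalent_stress([700, 1500, 2500], [0.70, 0.25, 0.05], m=3.0)
```

With m = 1 the expression reduces to the ordinary weighted mean; larger m weights the rare high-stress bins more heavily, as the Palmgren-Miner damage accumulation requires.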

The mean value of life for the detail was then calculated from the S-N curve as
follows:

L = 10^(log C) [ 1000 / (Sre Fsr) ]^m                          (15)

where log C is the intercept of the S-N line, Sre is the equivalent constant
amplitude stress range, m is the slope of the (log) S-N line, and Fsr is an
adjustment ratio. The histogram stresses were calculated from strains that were

measured in one particular ship, and at some distance from the fatigue details. The
ratio Fsr corrects the histogram stresses to what they would be in the particular
detail, and in the ship in question.

The number of cycles to failure was assumed to follow a Weibull distribution. The
Weibull parameters w and k were calculated from the mean value and standard
deviation of the number of cycles required for failure at the equivalent constant
amplitude stress range. Parameter k was a function of only the COV, and was calculated
for each detail considered. Parameter w is given by

w = L / Γ(1 + 1/k)                                             (16)

where Γ is the Gamma function, and the other variables are as given previously.

The number of loading cycles is generated as the product of three random


variables: annual use, percent use, and the number of loading cycles per hour. In
this case, the conditioning variable is the number of load cycles required for
failure, and its CDF is based on the Weibull distribution. The complement of this
CDF, evaluated at the actual number of load cycles, is the probability that the
actual number of load cycles is less than that required for failure (i.e., the
probability of survival); this result is therefore subtracted from unity. The
expression is given by

Pff/IJK = 1 - Exp[ -(Lcp/w)^k ]                                (17)

where Pff/IJK = the probability of failure in fatigue for the Ith simulation cycle, the
Jth fatigue detail, and the Kth time period; Lcp is the number of loading cycles in
the period, and w and k are the Weibull parameters. The probability of failure
is again the mean of this calculation over all cycles of simulation. A stable
solution was generated in 500 cycles of simulation.
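Equations (16) and (17) combine into a few lines of Python; the mean life and cycle counts in this sketch are invented, not the measured data:

```python
import math

def weibull_scale(mean_life, k):
    """Scale parameter w from the mean life -- eq. (16)."""
    return mean_life / math.gamma(1.0 + 1.0 / k)

def p_fatigue(cycles, w, k):
    """Probability that fatigue failure occurs within the applied cycles -- eq. (17)."""
    return 1.0 - math.exp(-((cycles / w) ** k))

w = weibull_scale(2.0e6, 1.5)   # a detail with an assumed mean life of 2e6 cycles
pf = p_fatigue(5.0e5, w, 1.5)   # failure probability after 5e5 applied cycles
```

As expected, pf increases monotonically with the number of applied cycles, approaching unity as the cycle count grows well beyond the mean life.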

SUMMARY AND CONCLUSIONS


A computer model has been constructed using the method of Monte-Carlo simulation
with conditional expectation to estimate the probability that the load will be
greater than the resistance (failure). The loading and resistance in each cycle of
simulation are calculated from values generated for each variable from its assumed
statistical distribution. The probability of failure (Pf) is then the average over
all cycles of one minus the cumulative distribution function of the loading evaluated
at the resistance (or, equivalently, of the CDF of the resistance evaluated at the
loading).

A critical region is defined in the ship's bottom forward of mid-length. Two
modes of failure are considered: plastic deformation of hull plating due to the ship
slamming into waves, and fatigue failure of structural connections due to random
cyclic loading. The resistance to plastic deformation is calculated as follows. The
measured strains are first converted to terms of pressure. The design dynamic
pressure is then calculated using standard methods of ship design, and multiplied by
the ratio of the pressure derived from the measured strain, to the calculated dynamic
pressure, for the ship in which the strains were measured. The loading is calculated
from the record of measured strains in the following way. The record is divided into
time intervals and the maximum pressure in each interval found. From the mean and
coefficient of variation (COV) of these maxima an expected extreme pressure over any
particular period can be found using the theory of extremes.

The fatigue calculation is similar in concept. From the record of measured
stresses, a histogram is constructed to represent the probability density function of
the loading. Each fatigue detail investigated is assigned a ratio to relate the
stresses in the detail to the expected stresses in the location of the strain gauges,
adjusted by the ratio of Von Mises stress in the vessel to the Von Mises stress in
the ship in which the strains were measured. Using this ratio and the slope of the
S-N line for the detail, an equivalent constant amplitude stress range is calculated.
The parameters of the Weibull distribution (of the S-N line) are calculated at the
number of cycles corresponding to the constant amplitude stress. The equation of the
Weibull distribution (the CDF of the S-N curve on the axis of cycles) is then
evaluated at the value of generated loading cycles in the simulation cycle and for
the period considered. This is the probability that the actual loading cycles will
be less than the number of cycles required for failure of the particular detail,
i.e., the probability of survival. The analysis of an aging patrol boat class
correlated very well with a recent structural report and the decommissioning record.

ACKNOWLEDGEMENTS
The authors would like to acknowledge the support provided for this project by
the United States Coast Guard, the United States Naval Academy, and the University of
Maryland. Parts of this paper will appear in the May 1990 issue of the Naval
Engineers Journal, American Society of Naval Engineers.

REFERENCES
1. Ayyub, B.M., Haldar, A., (1984), "Practical Structural Reliability Techniques,"
Journal of Structural Engineering, American Society of Civil Engineers, Vol. 110,
No.8, Paper No. 19062, August 1984, pp. 1707-1724.
2. Ayyub, B.M., G.J. White, and T.F. Bell-Wright (1989), "Reliability-Based
Comparative Life Expectancy Assessment of Patrol Boat Hull Structures," Report
Submitted to U.S. Coast Guard R&D Center, Avery Point, August 1989.
3. Hughes, O.F., (1981), "Design of Laterally Loaded Plating - Uniform Pressure
Loads," Journal of Ship Research, SNAME, Vol. 25, No.2, June, 1981, pp. 77-89.
4. Hughes, O.F., (1983), Ship Structural Design: A Rationally-Based, Computer-Aided,
Optimization Approach, John Wiley and Sons, New York, NY.
5. Munse, W.H., T.W. Wilbur, M.L. Tellalian, K. Nicoll, and K. Wilson, (1982),
"Fatigue Characterization of Fabricated Ship Details for Design," Ship Structure
Committee Report SSC-318, 1982.
6. Purcell, E.S., Allen, S.J., and Walker, R.J., (1988), "Structural Analysis of the
U. S. Coast Guard Island Class Patrol Boat," Trans., SNAME, Vol. 96, 1988.
7. Schumacher, M., (1979), Seawater Corrosion Handbook, Noyes Data Corporation, Park
Ridge, New Jersey.
8. White, G.J., and B.M. Ayyub, (1985), "Reliability Methods for Ship Structures,"
Naval Engineers Journal, ASNE, Vol. 97, No.4, May, 1985, pp. 86-96.
PARAMETER SENSITIVITY OF FAILURE PROBABILITIES

Karl Breitung
Seminar für angewandte Stochastik der Universität München
Akademiestr. 1/IV, D-8000 München 40, FR Germany

1 Introduction
In many reliability problems the solution depends on various parameters whose values are not known exactly.
Often some reasonable estimates of these parameters are calculated, and these estimates are then treated
as if they were the true parameter values.
The essential shortcoming of such methods is that they neglect the gap between the given data and the
mathematical model. From the data, the structure of the model and its parameters can be estimated only
with some uncertainty. The problem of calculating structural safety in such cases is considered in [8].
It is therefore important to have information about the influence of changes in the parameters on the result, which
is in general the failure probability. First results for this problem can be found in the Ph.D. thesis of M.
Hohenbichler [10]. A drawback of this work is that no analytic results are obtained for non-normal random
variables. Further, only the influence on the beta value, and not on the failure probability, is considered.
The basic idea for constructing simple approximations for these sensitivity factors is the same as for
deriving estimates for the failure probability: the Laplace method is used for obtaining asymptotic
approximations of these quantities.
These results are also important for the optimization of structures, where an optimal set of parameters
has to be found which minimizes the failure probability under some restrictions. Further applications
include omission sensitivity factors (see [11]).

2 Definition of the problem

Given is a random vector $X(\theta) = (X_1(\theta), \dots, X_n(\theta))$, which depends on a parameter vector $\theta = (\theta_1, \dots, \theta_k)$.
For all $\theta$ the random vector $X(\theta)$ has a continuous probability density $f_\theta(x)$ with $f_\theta(x) > 0$ for all $x \in \mathbb{R}^n$.
The loglikelihood function of the density is then $l_\theta(x) = \ln(f_\theta(x))$.
Further given is a limit state function $g : \mathbb{R}^n \to \mathbb{R}$, $x \mapsto g(x, \theta)$, which depends on the same parameter
vector $\theta$. For fixed $\theta$ the $n$-dimensional space is divided into a failure domain $F(\theta) = \{x;\ g(x,\theta) \le 0\}$ and
the safe domain $S(\theta) = \{x;\ g(x,\theta) > 0\}$. The limit state surface $G(\theta) = \{x;\ g(x,\theta) = 0\}$ is the boundary
of the failure domain.
The failure probability $P_\theta(F)$ is given by

$$P_\theta(F) = \int_{g(x,\theta) \le 0} f_\theta(x)\, dx \qquad (1)$$

Here the failure probability depends on the parameter vector $\theta$. Changes in the parameters have an influence
on the failure probability.

For the approximate computation of the failure probabilities $P_\theta(F)$, asymptotic methods have been
developed. The case of normal random variables is treated in [3]. A generalization of this method to
non-normal random variables without the use of transformation techniques is given in [4] and [6]. For this
function $P_\theta(F)$ of $\theta$, asymptotic approximations for the partial derivatives with respect to
the $\theta_i$'s are derived in the following.

3 Sensitivity and elasticity

The partial derivative of a function with respect to a variable (parameter) indicates the influence of this
variable on the value of the function. But since the value of the partial derivative depends on the scale
used for this variable, it may be difficult to compare these values for different variables. Therefore it can be
useful to measure the sensitivity to one variable by the partial elasticity. This measure of sensitivity is used
in mathematical economics; a definition can be found in [7], p. 195. The partial elasticity of a function
$f(x)$ with respect to the variable $x_i$ is given by

$$\epsilon_i(f(x)) = \frac{x_i}{f(x)} \cdot \frac{\partial f(x)}{\partial x_i} \qquad (2)$$

This quantity is independent of the scale. Approximately, $\epsilon_i(f(x))$ is the percentage change of $f(x)$
if $x_i$ is changed by one percent.
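The scale invariance of the partial elasticity (2) is easy to check numerically. The following sketch (plain Python; the power-law function f is a hypothetical example, not taken from the paper) estimates $\epsilon_i$ by a central finite difference:

```python
# Numerical check of the partial elasticity, eq. (2).
# The function f below is a hypothetical example chosen for illustration.
def elasticity(f, x, i, rel_step=1e-6):
    """Estimate eps_i(f(x)) = (x_i / f(x)) * df/dx_i by central differences."""
    h = rel_step * abs(x[i])
    xp = list(x); xp[i] += h
    xm = list(x); xm[i] -= h
    dfdx = (f(xp) - f(xm)) / (2.0 * h)
    return x[i] / f(x) * dfdx

# For f(x) = x1^2.5 * x2^0.5 the elasticity w.r.t. x1 is exactly 2.5,
# independent of the scale of x1 -- as stated in the text.
f = lambda x: x[0] ** 2.5 * x[1] ** 0.5
print(elasticity(f, [3.0, 7.0], 0))    # close to 2.5
print(elasticity(f, [300.0, 7.0], 0))  # still close to 2.5
```

Rescaling $x_1$ by a factor of 100 leaves the elasticity unchanged, whereas the plain partial derivative would change by that factor.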

4 Parameter dependence of integrals

In this section a general expression is given for the partial derivatives, with respect to the parameters, of
multivariate integrals depending on these parameters.
First we consider the one-dimensional case. Given are two continuously differentiable functions $a(r)$ and
$b(r)$, and a continuous function $f(x,r)$ which has a continuous partial derivative with respect to $r$.
Consider a one-dimensional integral, depending on the parameter $r$, of the form

$$\int_{a(r)}^{b(r)} f(x,r)\, dx.$$

The Leibniz theorem for the differentiation of an integral gives for the partial derivative with respect to
$r$ (see [1], p. 11, 3.3.7):

$$\frac{\partial}{\partial r} \int_{a(r)}^{b(r)} f(x,r)\, dx = \int_{a(r)}^{b(r)} \frac{\partial f(x,r)}{\partial r}\, dx + b'(r)\, f(b(r),r) - a'(r)\, f(a(r),r). \qquad (3)$$

The first summand describes the influence of $r$ on the function $f(x,r)$ and the second the influence on the
boundary points.
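The Leibniz rule (3) can be verified numerically; in the sketch below the integrand $f(x,r) = x^2 + r$ and the limits $a(r) = r$, $b(r) = r^2$ are hypothetical examples chosen for illustration:

```python
# Numerical illustration of the Leibniz rule, eq. (3).
# Integrand and limits are hypothetical examples, not from the paper.
def quad(f, a, b, n=2000):
    """Composite Simpson rule on [a, b] (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3.0

f  = lambda x, r: x * x + r           # integrand f(x, r)
fr = lambda x, r: 1.0                 # its partial derivative w.r.t. r
a, da = (lambda r: r), (lambda r: 1.0)         # a(r), a'(r)
b, db = (lambda r: r * r), (lambda r: 2 * r)   # b(r), b'(r)

r, h = 1.5, 1e-6
F = lambda r: quad(lambda x: f(x, r), a(r), b(r))
lhs = (F(r + h) - F(r - h)) / (2 * h)                # direct derivative of the integral
rhs = (quad(lambda x: fr(x, r), a(r), b(r))
       + db(r) * f(b(r), r) - da(r) * f(a(r), r))    # right-hand side of eq. (3)
print(lhs, rhs)   # the two values agree
```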
This result can be generalized to functions of several variables. Here the boundary of the integration
domain is a surface in the $n$-dimensional space.
Let there be given two continuously differentiable functions $f : \mathbb{R}^n \times I \to \mathbb{R}$, $(x,r) \mapsto f(x,r)$ and
$g : \mathbb{R}^n \times I \to \mathbb{R}$, $(x,r) \mapsto g(x,r)$, with $I$ an open interval. Under some regularity conditions the integral

$$F(r) = \int_{g(x,r) \le 0} f(x,r)\, dx \qquad (4)$$

then exists for all $r \in I$, and the partial derivative of this integral with respect to $r$ is given by

$$\frac{dF(r)}{dr} = \int_{g(x,r) \le 0} f_r(x,r)\, dx - \int_{G(r)} f(y,r)\, g_r(y,r)\, |\nabla_y g(y,r)|^{-1}\, ds_r(y) \qquad (5)$$

Here $G(r) = \{x;\ g(x,r) = 0\}$, and $ds_r(y)$ denotes surface integration over $G(r)$.
In the following a short proof of this result is given. First we consider the difference $F(r+h) - F(r)$:

$$F(r+h) - F(r) = \int_{g(x,r+h) \le 0} f(x,r+h)\, dx - \int_{g(x,r) \le 0} f(x,r)\, dx \qquad (6)$$

$$= \underbrace{\int_{g(x,r+h) \le 0} \left( f(x,r+h) - f(x,r) \right) dx}_{=:D_1(h)} \qquad (7)$$

$$+ \underbrace{\int_{g(x,r+h) \le 0} f(x,r+h)\, dx - \int_{g(x,r) \le 0} f(x,r+h)\, dx}_{=:D_2(h)} \qquad (8)$$

For small $\epsilon > 0$, in a neighborhood $G_\epsilon(r) = \{x;\ \min_{y \in G(r)} |y - x| < \epsilon\}$ of $G(r)$ a coordinate system can be
introduced, where each $y \in G_\epsilon(r)$, which is given in the form $y = x + \delta \cdot n(x)$ (here $n(x)$ denotes the surface
normal at $x$) with $x \in G(r)$, has the coordinates $(x, \delta)$. In this coordinate system the integral $D_2(h)$ can be
written in the form

$$D_2(h) = \int_{g(x,r)=0} \int_0^{l(x,h)} f(x + \delta \cdot n(x),\, r)\, D(x,\delta)\, d\delta\, ds_r(x). \qquad (9)$$

Here $D(x,\delta)$ is the transformation determinant for the change of the coordinates. Due to the definition of
the coordinates, $D(x,0) = 1$. The function $l(x,h)$ is defined implicitly by the equation $g(x + l(x,h)\, n(x),\, r+h) = 0$.
The existence of this function can be proven (for sufficiently small $h$) using the implicit function
theorem (see [9], p. 148), provided always $\nabla_y g(y,r) \neq 0$ for $y \in G(r)$.
In the limit for $h \to 0$:

$$\lim_{h \to 0} \frac{1}{h} D_2(h) = \int_{g(x,r)=0} l_h(x,0)\, f(x,r)\, ds_r(x) \qquad (10)$$

Making a Taylor expansion of the function $g(x,r)$ we find

$$g(x + l(x,h)\, n(x),\, r+h) = g(x,r) + l(x,h)\left( n^T(x) \nabla_x g(x,r) \right) + h\, g_r(x,r) + o(h) = 0 \qquad (11)$$

With a Taylor expansion of $l(x,h)$ this yields

$$l(x,h) = l(x,0) + h\, l_h(x,0) + o(h), \quad l(x,0) = 0. \qquad (12)$$

Since $n^T(x) \nabla_x g(x,r) = |\nabla_x g(x,r)|$, from the last equation we get:

$$h\, l_h(x,0)\, |\nabla_x g(x,r)| = -h\, g_r(x,r) + o(h) \qquad (13)$$

$$l_h(x,0) = -g_r(x,r)\, |\nabla_x g(x,r)|^{-1} \qquad (14)$$

This gives for $D_2(h)$:

$$\lim_{h \to 0} \frac{1}{h} D_2(h) = -\int_{g(x,r)=0} g_r(x,r)\, |\nabla_x g(x,r)|^{-1} f(x,r)\, ds_r(x) \qquad (15)$$

For the first integral $D_1(h)$ we obtain, since the domain $\{x;\ g(x,r+h) \le 0\}$ for $h \to 0$ approaches the
domain $\{x;\ g(x,r) \le 0\}$, by interchanging integration and partial differentiation, which is possible under
slight regularity conditions (see [9], p. 238):

$$\lim_{h \to 0} \frac{1}{h} D_1(h) = \int_{g(x,r) \le 0} f_r(x,r)\, dx \qquad (16)$$

With equations 15 and 16 together the result is obtained.
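As a concrete numerical check of formula (5) (a sketch with an example chosen for this purpose, not taken from the paper), take $f$ as the bivariate standard normal density, which does not depend on $r$, and $g(x,r) = x_1 + x_2 - r$. Then $F(r) = \Phi(r/\sqrt{2})$, the domain term vanishes, and only the surface integral contributes:

```python
# Checking eq. (5): f = bivariate standard normal density (no r-dependence,
# so the domain term is zero), g(x, r) = x1 + x2 - r, F(r) = Phi(r/sqrt(2)).
import math

phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
f2  = lambda x1, x2: phi(x1) * phi(x2)        # joint density
Phi = lambda z: 0.5 * math.erfc(-z / math.sqrt(2))

r, h = 1.0, 1e-5
dF_direct = (Phi((r + h) / math.sqrt(2)) - Phi((r - h) / math.sqrt(2))) / (2 * h)

# Formula (5): only the surface integral remains; g_r = -1, |grad g| = sqrt(2).
# Parametrize the line x1 + x2 = r as (t, r - t); arc length element ds = sqrt(2) dt.
n, L = 4000, 10.0
dt = 2 * L / n
dF_formula = sum(-(-1.0) / math.sqrt(2) * f2(t, r - t) * math.sqrt(2) * dt
                 for t in (-L + (k + 0.5) * dt for k in range(n)))

print(dF_direct, dF_formula)   # both approx exp(-r^2/4)/sqrt(4*pi)
```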



5 Asymptotics

In this section two results of the Laplace method for domain and surface integrals are given. Proofs can be
found in [5]. We consider integrals of the form

$$I(\beta) = \int_F f_1(x) \exp(\beta^2 h(x))\, dx. \qquad (17)$$

Here $F = \{x;\ g(x) \le 0\}$.
The asymptotic behavior of these integrals can be studied with the Laplace method (for a description of
this method see [2]). If there is only one point $x^*$ on the boundary $G = \{x;\ g(x) = 0\}$ where the function
$h(x)$ achieves its global maximum, the following asymptotic approximation can be derived

(18)

(19)

with

$$H(x^*) = \left( h_{ij}(x^*) - \frac{|\nabla h(x^*)|}{|\nabla g(x^*)|}\, g_{ij}(x^*) \right)_{i,j=1,\dots,n} \qquad (20)$$

and the $n \times (n-1)$ matrix $A(x^*) = (a_1(x^*), \dots, a_{n-1}(x^*))$ is composed of vectors $a_i(x^*)$ which constitute
an orthonormal basis of the tangential space of $G$ at $x^*$. The $n \times n$ matrix $C(x^*)$ is the cofactor matrix of
the $n \times n$ matrix $H(x^*)$.
In the same way an approximation for surface integrals can be derived.

(21)
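The mechanism behind such Laplace approximations can be seen already in one dimension. The following sketch (a hypothetical example constructed for illustration, not one of the paper's formulas) takes $h(x) = -x$ on $[0,1]$, so the maximum of $h$ lies on the boundary at $x^* = 0$ and $I(\beta) = \int_0^1 e^{\beta^2 h(x)}\,dx \sim \beta^{-2}$:

```python
# One-dimensional illustration of the Laplace method for a boundary maximum
# (hypothetical example): h(x) = -x on [0, 1], maximum at x* = 0, so
# I(beta) = int_0^1 exp(-beta^2 x) dx ~ 1/beta^2 for large beta.
import math

def I(beta, n=100000):
    # midpoint rule for the integral on [0, 1]
    dx = 1.0 / n
    return sum(math.exp(-beta**2 * (k + 0.5) * dx) for k in range(n)) * dx

for beta in (2.0, 4.0, 8.0):
    exact = (1 - math.exp(-beta**2)) / beta**2
    laplace = 1.0 / beta**2    # leading term exp(beta^2 h(x*)) / (beta^2 |h'(x*)|)
    print(beta, I(beta), exact, laplace)
# The ratio I(beta)/laplace tends to 1 as beta grows.
```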

6 Asymptotic approximations for probability integrals

If there is given an arbitrary probability density $f : \mathbb{R}^n \to \mathbb{R}$, $x \mapsto f(x)$ with $f(x) > 0$ for all $x \in F \subset \mathbb{R}^n$,
the integral can be written in the following form

$$I = \int_F \exp(\ln(f(x)))\, dx. \qquad (22)$$

If the probability content of $F$ is small, we can assume that the density $f(x)$ is also small in $F$ and that
then the logarithm $l(x) = \ln(f(x))$ is negative for all points in $F$. Defining

$$h(x) = \beta_0^{-2}\, l(x), \qquad (23)$$

the integral can be written in the form

$$I = \int_F \exp(\beta_0^2\, h(x))\, dx. \qquad (24)$$

Here is defined

$$\beta_0 = \sqrt{-\max_{x \in F} l(x)}. \qquad (25)$$

Defining now the function

$$I(\beta) = \int_F \exp(\beta^2 h(x))\, dx, \qquad (26)$$

we have Laplace-type integrals. For the asymptotic behavior of this function for $\beta \to \infty$ the usual methods
can be applied.
If there is only one point $x^*$ on the boundary $G$ where the integrand is maximal, under some regularity
conditions (see [4] and [5]) the following approximation for the failure probability is obtained

$$P(F) \sim \frac{(2\pi)^{(n-1)/2}\, f(x^*)}{|\nabla l(x^*)|\, \sqrt{|\det(H^*(x^*))|}} \qquad (27)$$

Here $H^*(x^*) = A^T(x^*)\, H(x^*)\, A(x^*)$, with

$$H(x^*) = \left( l_{ij}(x^*) - \frac{|\nabla l(x^*)|}{|\nabla g(x^*)|}\, g_{ij}(x^*) \right)_{i,j=1,\dots,n} \qquad (28)$$

The matrix $A(x^*)$ is defined as in the last paragraph.

7 Asymptotic approximations for the sensitivity factors


Given is a family of $n$-dimensional probability densities $f(x,\theta)$, depending on a parameter $\theta \in \mathbb{R}^k$. We
assume that $f(x,\theta) > 0$ for all $(x,\theta) \in \mathbb{R}^n \times D$, with $D$ an open subset of $\mathbb{R}^k$. The loglikelihood
function $l(x,\theta) = \ln(f(x,\theta))$ is a function of $x$ and $\theta$.
For a fixed parameter value $\theta \in D$ the probability content $P_\theta(F)$ of a subset $F \subset \mathbb{R}^n$ is given by

$$P_\theta(F) = \int_F \exp(l(x,\theta))\, dx. \qquad (29)$$

The partial derivative of $P_\theta(F)$ with respect to $\theta_i$ is, by interchanging differentiation and integration,

$$\frac{\partial P_\theta(F)}{\partial \theta_i} = \int_F \frac{\partial l(x,\theta)}{\partial \theta_i} \exp(l(x,\theta))\, dx \qquad (30)$$

The partial elasticity $\epsilon_i(P_\theta(F))$ of the probability $P_\theta(F)$ with respect to $\theta_i$ is then:

$$\epsilon_i(P_\theta(F)) = \frac{\theta_i}{P_\theta(F)} \int_F \frac{\partial l(x,\theta)}{\partial \theta_i} \exp(l(x,\theta))\, dx \qquad (31)$$

As scaling factor a suitable $\beta_0$ is chosen, for example $\beta_0 = \sqrt{-\max_F l(x,\theta^*)}$ for a $\theta^*$ with $\max_F l(x,\theta^*) < 0$.
Replacing now in the integral the function $l(x,\theta)$ by $h(x,\theta) = l(x,\theta)/\beta_0^2$, we obtain

$$\frac{\partial P_\theta(F)}{\partial \theta_i} = \int_F \beta_0^2\, \frac{\partial h(x,\theta)}{\partial \theta_i} \exp(\beta_0^2 h(x,\theta))\, dx \qquad (32)$$

and in the same way for the partial elasticity with respect to $\theta_i$

$$\epsilon_i(P_\theta(F)) = \frac{\theta_i}{P_\theta(F)} \int_F \beta_0^2\, \frac{\partial h(x,\theta)}{\partial \theta_i} \exp(\beta_0^2 h(x,\theta))\, dx. \qquad (33)$$

We define now the integrals $P_\theta^\beta(F)$ by

$$P_\theta^\beta(F) = \int_F \exp(\beta^2 h(x,\theta))\, dx \qquad (34)$$

We assume now that there is only one point $x^*$ on the boundary of $F$ where the loglikelihood function
achieves its global maximum with respect to $F$. For the partial derivatives and elasticities the following
asymptotic approximations are then obtained using equation 18

$$\frac{\partial P_\theta^\beta(F)}{\partial \theta_i} \approx \frac{\partial l(x^*,\theta)}{\partial \theta_i}\, P_\theta^\beta(F) \left( \frac{\beta}{\beta_0} \right)^2, \quad \beta \to \infty \qquad (35)$$

$$\epsilon_i(P_\theta^\beta(F)) \approx \theta_i \cdot \frac{\partial l(x^*,\theta)}{\partial \theta_i} \left( \frac{\beta}{\beta_0} \right)^2, \quad \beta \to \infty. \qquad (36)$$

This yields for the partial derivative with respect to $\theta_i$ at $\theta^*$:

$$\left. \frac{\partial P_\theta^{\beta_0}(F)}{\partial \theta_i} \right|_{\theta = \theta^*} \approx \frac{\partial l(x^*,\theta^*)}{\partial \theta_i}\, P_{\theta^*}^{\beta_0}(F) \qquad (37)$$

The general case, where the limit state function also depends on the parameter vector, can be treated in a
similar way. We restrict our considerations to the case of one parameter $\tau$.

We define the integrals $P_\tau^\beta(F)$ by

$$P_\tau^\beta(F) = \int_{g(x,\tau) \le 0} \exp(\beta^2 h(x,\tau))\, dx \qquad (38)$$

with $h(x,\tau) = \beta_0^{-2}\, l(x,\tau)$.
From equation 5 we get the following representation of the partial derivative with respect to $\tau$:

$$\frac{\partial P_\tau^\beta(F)}{\partial \tau} = \beta^2 \int_{g(x,\tau) \le 0} \frac{\partial h(x,\tau)}{\partial \tau}\, e^{\beta^2 h(x,\tau)}\, dx - \int_{g(x,\tau)=0} \frac{g_\tau(x,\tau)}{|\nabla_x g(x,\tau)|}\, e^{\beta^2 h(x,\tau)}\, ds_G(x). \qquad (39)$$
Here $ds_G(x)$ denotes surface integration over $G$. For these integrals asymptotic approximations can be
derived using the results in paragraph 5. For the first integral the approximation is:

$$\beta^2 \int_{g(x,\tau) \le 0} \frac{\partial h(x,\tau)}{\partial \tau}\, e^{\beta^2 h(x,\tau)}\, dx \approx \qquad (40)$$

$$\approx \beta^2\, \frac{\partial h(x^*,\tau)}{\partial \tau} \int_{g(x,\tau) \le 0} e^{\beta^2 h(x,\tau)}\, dx \approx \qquad (41)$$

$$\approx \left( \frac{\beta}{\beta_0} \right)^2 \frac{\partial l(x^*,\tau)}{\partial \tau}\, P_\tau^\beta(F) \qquad (42)$$

For the second integral, the surface integral, equation 21 gives:

$$\int_{g(x,\tau)=0} \frac{g_\tau(x,\tau)}{|\nabla_x g(x,\tau)|}\, e^{\beta^2 h(x,\tau)}\, ds_G(x) \approx \qquad (43)$$

$$\approx \frac{g_\tau(x^*,\tau)}{|\nabla_x g(x^*,\tau)|} \int_{g(x,\tau)=0} e^{\beta^2 h(x,\tau)}\, ds_G(x) \approx \left( \frac{\beta}{\beta_0} \right)^2 \frac{g_\tau(x^*,\tau)\, |\nabla_x l(x^*,\tau)|}{|\nabla_x g(x^*,\tau)|}\, P_\tau^\beta(F)$$

The last relation in the equation above is found by comparing the equations 18 and 21:

$$\int_{g(x,\tau)=0} e^{\beta^2 h(x,\tau)}\, ds_G(x) \approx \left( \frac{\beta}{\beta_0} \right)^2 |\nabla_x l(x^*,\tau)| \int_{g(x,\tau) \le 0} e^{\beta^2 h(x,\tau)}\, dx \qquad (44)$$

Divided by the factor $(\beta/\beta_0)^2\, |\nabla_x l(x^*,\tau)|$, the surface integral is thus an asymptotic approximation for the
probability $P_\tau^\beta(F)$.
Adding the two approximations gives the final result:

$$\frac{\partial P_\tau^\beta(F)}{\partial \tau} \approx \left( \frac{\partial l(x^*,\tau)}{\partial \tau} - \frac{g_\tau(x^*,\tau)\, |\nabla_x l(x^*,\tau)|}{|\nabla_x g(x^*,\tau)|} \right) P_\tau^\beta(F) \left( \frac{\beta}{\beta_0} \right)^2 \qquad (45)$$

8 Example
Let there be given two independent random variables $X_1$ and $X_2$, each with a lognormal distribution with
$E(\ln(X_i)) = \mu$ and $\mathrm{var}(\ln(X_i)) = \sigma^2$ for $i = 1, 2$. The joint probability density function $f(x_1, x_2)$ of these
random variables is then

$$f(x_1, x_2) = \frac{1}{2\pi \sigma^2 x_1 x_2} \exp\left( -\frac{(\ln(x_1)-\mu)^2 + (\ln(x_2)-\mu)^2}{2\sigma^2} \right), \quad x_1, x_2 > 0. \qquad (46)$$

The loglikelihood function is

$$l(x_1, x_2) = -\ln(2\pi\sigma^2) - \ln(x_1 x_2) - \frac{(\ln(x_1)-\mu)^2 + (\ln(x_2)-\mu)^2}{2\sigma^2}. \qquad (47)$$

We consider as limit state function the function

$$g(x_1, x_2, \gamma) = \gamma^2 - x_1 x_2. \qquad (48)$$

For the derivatives of the loglikelihood function we get:

$$\frac{\partial l}{\partial \mu} = \frac{(\ln(x_1)-\mu) + (\ln(x_2)-\mu)}{\sigma^2} \qquad (49)$$

$$\frac{\partial l}{\partial \sigma} = -\frac{2}{\sigma} + \frac{(\ln(x_1)-\mu)^2 + (\ln(x_2)-\mu)^2}{\sigma^3} \qquad (50)$$

The gradients of $l$ and $g$ are, with $c_\gamma = 1 + \frac{\ln(\gamma)-\mu}{\sigma^2}$,

$$\nabla l(x) = -\left( \frac{1}{x_1}\left(1 + \frac{\ln(x_1)-\mu}{\sigma^2}\right),\ \frac{1}{x_2}\left(1 + \frac{\ln(x_2)-\mu}{\sigma^2}\right) \right), \quad \nabla g(x) = -(x_2,\, x_1). \qquad (51)$$

At the point $(\gamma, \gamma)$, the only global maximum of the loglikelihood function on the failure domain, we obtain for the norms of these
vectors:

$$|\nabla l(\gamma,\gamma)| = \sqrt{2}\, c_\gamma\, \gamma^{-1} \qquad (52)$$

$$|\nabla g(\gamma,\gamma)| = \sqrt{2}\, \gamma \qquad (53)$$

For the Hessian (28) at $(\gamma, \gamma)$,

$$H(\gamma,\gamma) = \left( l_{ij}(\gamma,\gamma) - \frac{|\nabla l(\gamma,\gamma)|}{|\nabla g(\gamma,\gamma)|}\, g_{ij}(\gamma,\gamma) \right)_{i,j=1,2}, \qquad (54)$$

we get then

$$H(\gamma,\gamma) = \gamma^{-2} \begin{pmatrix} c_\gamma - \sigma^{-2} & c_\gamma \\ c_\gamma & c_\gamma - \sigma^{-2} \end{pmatrix}. \qquad (55)$$

The gradient of $l$ at $(\gamma,\gamma)$ is parallel to the vector $(1,1)$; therefore a unit vector orthogonal to this vector is
$a_1 = (1/\sqrt{2},\, -1/\sqrt{2})$. This gives

$$H^*(\gamma,\gamma) = a_1^T\, H(\gamma,\gamma)\, a_1 = -\gamma^{-2}\sigma^{-2}. \qquad (56)$$

The joint density at $(\gamma,\gamma)$ is

$$f(\gamma,\gamma) = \frac{1}{2\pi\sigma^2\gamma^2} \exp\left( -\frac{(\ln(\gamma)-\mu)^2}{\sigma^2} \right). \qquad (57)$$

Now, using the approximation formula 27, we obtain as approximation for the failure probability

$$P_A(F) = \frac{\sqrt{2\pi}\, f(\gamma,\gamma)}{\sqrt{2}\left(1 + \frac{\ln(\gamma)-\mu}{\sigma^2}\right) \gamma^{-2} \sigma^{-1}} = \qquad (58)$$

$$= \frac{1}{\sqrt{2\pi}\,\sqrt{2}\,\sigma\left(1 + \frac{\ln(\gamma)-\mu}{\sigma^2}\right)} \exp\left( -\frac{(\ln(\gamma)-\mu)^2}{\sigma^2} \right) \qquad (59)$$

The partial derivatives of the loglikelihood function at $(\gamma,\gamma)$ with respect to the parameters $\mu$ and $\sigma$ are:

$$\frac{\partial l(\gamma,\gamma)}{\partial \mu} = \frac{2(\ln(\gamma)-\mu)}{\sigma^2} \qquad (60)$$

$$\frac{\partial l(\gamma,\gamma)}{\partial \sigma} = -\frac{2}{\sigma} + \frac{2(\ln(\gamma)-\mu)^2}{\sigma^3} \qquad (61)$$

For the derivative of $g$ at $(\gamma,\gamma)$ with respect to $\gamma$ we find:

$$g_\gamma = 2\gamma \qquad (62)$$

The exact probability content of the failure domain is given by

$$P(F) = \Phi\left( -\sqrt{2}\, \frac{\ln(\gamma)-\mu}{\sigma} \right) \qquad (63)$$

By differentiating this function with respect to the parameters $\mu$, $\sigma$ and $\gamma$, we obtain the true sensitivities
of the failure probability to changes in these parameters.
For the partial elasticities of the approximation $P_A(F)$ we get, using equation 45,

$$\epsilon_\mu(P_A(F)) = 2\mu\, \frac{\ln(\gamma)-\mu}{\sigma^2} \qquad (64)$$

$$\epsilon_\sigma(P_A(F)) = 2\left( -1 + \left( \frac{\ln(\gamma)-\mu}{\sigma} \right)^2 \right) \qquad (65)$$

$$\epsilon_\gamma(P_A(F)) = -2\left( 1 + \frac{\ln(\gamma)-\mu}{\sigma^2} \right) \qquad (66)$$

The partial elasticities of the true failure probability are asymptotically for $\gamma \to \infty$

$$\epsilon_\mu(P(F)) \sim 2\mu\, \frac{\ln(\gamma)-\mu}{\sigma^2} \qquad (67)$$

$$\epsilon_\sigma(P(F)) \sim 2\, \frac{(\ln(\gamma)-\mu)^2}{\sigma^2} \qquad (68)$$

$$\epsilon_\gamma(P(F)) \sim -2\, \frac{\ln(\gamma)-\mu}{\sigma^2} \qquad (69)$$

Asymptotically the approximations of the elasticities approach the true elasticities.
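The example can be checked numerically. The sketch below evaluates the exact probability (63) and its elasticities by finite differences and compares them with the asymptotic expressions; the parameter values $\mu = 0.2$, $\sigma = 0.5$, $\gamma = 50$ are illustrative choices, not taken from the paper:

```python
# Numerical check of the example; mu = 0.2, sigma = 0.5, gamma = 50 are
# illustrative values chosen for this sketch.
import math

Phi = lambda z: 0.5 * math.erfc(-z / math.sqrt(2))

def P(mu, sigma, gamma):
    """Exact failure probability, equation (63)."""
    return Phi(-math.sqrt(2) * (math.log(gamma) - mu) / sigma)

def elasticity(i, p, h=1e-6):
    """Finite-difference partial elasticity of P w.r.t. parameter i."""
    q, m = list(p), list(p)
    q[i] += h; m[i] -= h
    return p[i] * (P(*q) - P(*m)) / (2 * h) / P(*p)

mu, sigma, gamma = 0.2, 0.5, 50.0
t = (math.log(gamma) - mu) / sigma**2              # (ln(gamma) - mu) / sigma^2

asym = [2 * mu * t,                                        # eq. (64)
        2 * (-1 + ((math.log(gamma) - mu) / sigma)**2),    # eq. (65)
        -2 * (1 + t)]                                      # eq. (66)
exact = [elasticity(i, [mu, sigma, gamma]) for i in range(3)]
for a, e in zip(asym, exact):
    print(a, e)   # the pairs agree to within a few percent
```

For larger $\gamma$ the agreement improves, as the asymptotic character of the approximations suggests.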

9 Summary and conclusions

In this paper asymptotic approximations for the sensitivity of the failure probability to changes in the
parameters are derived. The parameter influence can be of a general form, i.e. the probability density and
the limit state function may depend on the parameters.
The results give only approximations. They can be improved by using suitable numerical or Monte-Carlo
integration techniques.

References
[1] M. Abramowitz and I.A. Stegun. Handbook of Mathematical Functions. Dover, New York, 1965.

[2] N. Bleistein and R.A. Handelsman. Asymptotic Expansions of Integrals. Holt, Rinehart and Winston,
New York, N.Y., 1975.
[3] K. Breitung. Asymptotic approximations for multinormal integrals. Journal of the Engineering Me-
chanics Division ASCE, 110(3):357-366, 1984.
[4] K. Breitung. Asymptotic approximations for probability integrals. Journal of Probabilistic Engineering
Mechanics, 4(4):187-190, 1989.
[5] K. Breitung. Asymptotische Approximationen für Wahrscheinlichkeitsintegrale. 1990. Habilitationsschrift
eingereicht an der Fakultät für Philosophie, Wissenschaftstheorie und Statistik der Ludwig-
Maximilians-Universität München.

[6] K. Breitung. Probability approximations by loglikelihood maximization. 1990. Submitted to the Journal
of the Engineering Mechanics Division, ASCE.

[7] A.C. Chiang. Fundamental Methods of Mathematical Economics. McGraw-Hill International Book
Company, Tokyo, third edition, 1984.

[8] A. Der Kiureghian. Measures of structural safety under imperfect states of knowledge. Journal of the
Engineering Mechanics Division ASCE, 115(5):1119-1140, 1989.
[9] W. Fleming. Functions of Several Variables. Springer, New York, 1977.

[10] M. Hohenbichler. Mathematische Grundlagen der Zuverlässigkeitstheorie erster Ordnung und einige
Erweiterungen. PhD thesis, Technical University of Munich, Munich, FR Germany, 1984.

[11] H. Madsen. Omission sensitivity factors. Structural Safety, 5:35-45, 1988.


EXPECTATION RATIO VERSUS PROBABILITY

F. Vasco Costa
Consulmar, Rua Joaquim A. Aguiar, 27
1000 Lisbon, Portugal

ABSTRACT

The design of structural systems is usually based on the consideration of a probability of failure regarded
as acceptable. As the gravity of the consequences of a failure with a given probability of occurrence, as
well as the expenses required to reduce such probability, can vary greatly from one structural system to
another, it is advocated that the design of structural systems be based, instead, on the consideration of
the ratio of the expectation of benefits to be gained by rendering a system safer to the consequent increase
in its cost.

INTRODUCTION

How safe do we need to design the elements of a structural system? That will depend not only on the
reliability of the information available on the actions to be exerted on the elements and on the ability of
the elements to resist such actions, but also on the seriousness of the direct and indirect consequences
that can result from the possible modes and degrees of damage and, last but not the least, on how much
it will cost to reduce the risks of the distinct possible modes and degrees of damage being reached.



When the frequency distributions of extremely high actions and of extremely low resistances are known,
and monetary values can be attributed to the possible consequences of the distinct modes and degrees of
damage, the selection of the convenient degree of safety can be dealt with as a simple economic problem,
namely by minimizing the sum of the initial cost, the present value of future maintenance expenses, and
the expectation of all direct and indirect expenses that can result from distinct modes and degrees of
damage (Vasco Costa, 1987).

In the following paragraphs it is advocated that the design of structural systems be based not just on the
consideration of their probabilities of failure, but on the consideration of ratios of the expectation of
benefits that can result from increases in safety to the expectation of the consequent increases in cost.

It is hoped that not only designers but all who have to participate in the selection, among alternative
designs, of the one sufficiently safe without being too costly, will find the expectation ratio a convenient
concept on which to base their decisions.

THE CONCEPT OF ACCEPTABLE PROBABILITY OF FAILURE

The consequences of the failure of structures of a particular type can vary greatly from one structure to
another, even when submitted to the same actions. For instance, if along a river some dykes are built to
protect plantations against floods and other dykes to protect densely inhabited regions, the failure of the
latter will certainly bring much more serious consequences.

As absolute safety is never achievable, it will be convenient to increase the safety of the dykes protecting
the inhabited regions, even if at the cost of reducing the safety of the dykes protecting the plantations.
It will also be convenient, as has been common practice in the Netherlands, to build some secondary
dykes, dividing the inhabited regions into small areas, so as to reduce the gravity of the consequences of
eventual local failures of the main dyke.

However, there is a well-marked tendency to stipulate for each type of engineering structure a probability
of failure to be regarded as acceptable, without taking into consideration differences in the seriousness of
the possible consequences of their failure. For most types of structure the probability of failure regarded
as acceptable is in the order of 10⁻⁴ to 10⁻⁶ per year (Baecher et al., 1980, pg. 455; Burke, 1982, pg. 129).

The statement that in an ideally designed structure all elements have to present the same probability of
failure is to be found in the technical literature (Sorensen and Jensen, 1985, pg. 63). In fact, as the
consequences of failure differ from element to element, depending on their function and location, it will
be convenient to render stronger those elements whose failure will bring more serious consequences.

The concept of acceptable probability of failure, because it does not permit taking into due account the
specific functions to be fulfilled and the distinct consequences of failure of each particular structure and
of each particular element of a same structure, cannot be regarded as an adequate tool for the optimization
of structural systems.


THE CONCEPT OF UTILITY

The concept of "utility" was introduced with a view to permitting the comparison of alternative designs
without having to put price tags on human and social values, such as prestige, comfort, happiness and
even human lives. Possibly because it is too dependent on personal attitudes towards events affecting
small or large numbers of human beings and involving small or large amounts of money, the use of the
utility concept did not spread as originally expected.

But can risks involving human and social values be dealt with as if they were just economic values?
Certainly not!

We all know that some people are more willing than others to incur risks, and that we behave quite
differently depending on our wealth, on being or not in a hurry, and on being the designer, the owner,
the operator or the user of a particular structure. For a frank and stimulating discussion on how we
perceive and react to risks see "Technological Risks" (Lind, ed., 1982), "Structural Safety" (Ferry Borges
and Castanheta, 1985) and "Levels of Environmental Decisions" (Svenson and Fischhof, 1985).

THE CONCEPT OF EXPECTATION

The use of the concept of "expectation", meaning by that the product of the probability of occurrence of
an event by the amount of money to be gained or lost in case of its occurrence, is more appropriate than
that of the simple probability for the selection of the degree of safety most adequate to each structure
and to each element of a particular structure. This is because it permits taking into due consideration,
besides the probability of occurrence of the possible modes of damage, the gravity of their consequences.

The expectations of benefits and of costs do not necessarily have to be expressed in monetary terms.
Such expectations can be expressed, for instance, in terms of the number of working hours gained or
lost, or of lives protected or endangered. What is essential, if they are to be compared, is to express
benefits and costs in a common unit, be it dollars, hours, lives or any other unit.

THE CONCEPT OF EXPECTATION RATIO

The expectation ratio, meaning by that the ratio of the expectation of benefits to be gained to the
expectation of costs to be incurred, can be regarded as a measure of the economic utility of a system.
The higher the value of such a ratio the better.

    E = Expectation of benefits / Expectation of costs          (1)

Sure benefits are to be included in the numerator and sure costs in the denominator of ratio (1), taking
into account that their probabilities are equal to 1.

Being a dimensionless quantity, like probability, such a ratio presents, over the simple expectations of
benefits or of costs, the advantage of being less dependent on subjective evaluations involving social and
human values. This assumes that personal biases will equally affect the evaluation of benefits and of costs.

Different persons on a given occasion, or the same person on different occasions, will not likely appraise
with the same criterion large and small, sure and probable, present and future benefits or costs, if human
or social values are involved. But if one person appraises all of them on a given occasion, the values found
for the ratio of the sum of sure plus probable benefits to the sum of sure plus probable costs will certainly
be less affected by personal bias than the values attributed to each single benefit or each single cost.

The present values of future benefits and costs are to be evaluated by dividing the benefits and costs
expected for each period of time the structural system will be in operation by (1 + r)^n, where r is the
rate of interest and n the number of periods of time the system has been kept in service (Baecher et al.,
1980, pg. 450, and Vasco Costa, 1987, pg. 73).
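These definitions can be illustrated with a small computation. The sketch below uses invented numbers (discount rate, benefit and cost streams), not data from the paper, to evaluate the present-value expectation ratio of equation (1):

```python
# Illustrative computation of an expectation ratio; all numbers are hypothetical.
def present_value(amounts, r):
    """Discount a per-period stream by (1 + r)^n, n = 1, 2, ..."""
    return sum(a / (1 + r) ** n for n, a in enumerate(amounts, start=1))

r = 0.05                           # rate of interest per period
benefits = [120.0] * 20            # expected benefits per period (probability x amount)
costs    = [800.0] + [15.0] * 19   # initial cost (probability 1) plus maintenance

E = present_value(benefits, r) / present_value(costs, r)
print(E)   # a ratio above unity suggests the design is economically profitable
```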

The expectation ratio can be regarded as an extension of the concept of the annual economic risk-benefit
factor, suggested by Baecher et al. (1980, pg. 451) as a tool to help decide when the construction of a new
dam is to be authorized.

Expectation ratios for different types of structures and for different elements of a same structure are to
be evaluated taking into account not only the particular functions each of them will have to fulfil but,
as well, how their eventual failure will affect the behaviour of others, how failures will propagate once
started, how much repairs will cost, and for how long the structure will, for each mode of failure, remain
out of service or render only a deficient service (Vasco Costa, 1987).

The evaluation of the expectations of direct and indirect probable benefits, as well as that of all possible
consequences of the different modes and degrees of damage of the distinct elements of a structure, will
not always be an easy task (Lind, 1982, Guedes-Soares and Viana, 1985, and Svenson and Fischhof, 1985).
But there can be no doubt that taking such expectations into account, even if only evaluated in
approximate terms, will contribute to improving the reliability of structural systems.

Probabilities of failure to be regarded as acceptable can vary between wide limits, say from 10⁻² to 10⁻⁸,
depending on the social or human values affected and the amounts of money involved. Values of benefits
and of costs can also vary between wide limits, from a few dollars to several billion dollars, depending on
the importance of the type of structural system being considered. In general, the larger the benefits or
the costs that can result from the occurrence of an event, the smaller will be the probability of such
occurrence.

Expectations, being the product of probabilities by benefits or costs that usually increase when their
probabilities decrease, can be assumed to vary between much narrower limits than the simple
probabilities. Expectation ratios, being ratios of expectations, can be expected to vary between even
narrower limits, say from 1 to 3.



In fact, the utilization of structural systems having expectation ratios below unity will not be
economically profitable and, on the other hand, only in exceptional cases will the expectation of benefits
reach values several times larger than the expectation of costs.

The use of the expectation ratio concept will be of particular interest when alternative designs are to be
compared, the more reliable being more costly. When comparing alternative designs, it will be convenient
to take into account the expectations of only the extra benefits to be gained and of the extra costs to be
incurred.

    E = Expectation of extra benefits / Expectation of extra costs          (2)

To improve traffic conditions, will it be preferable to widen an existing road or to build one more road?
Does it pay to design a structure with redundant elements, or will it be preferable to keep spare parts in
stock? Which elements of a system will it be convenient to make stronger, and how much stronger? By
comparing expectation ratios it will be possible to answer such types of questions.

As the values of the expectation ratios to be regarded as convenient will not vary between such wide
limits as the values of acceptable probabilities of failure, and will not be so much affected by personal
biases, it can be hoped that the selection of acceptable expectation ratios will not pose such serious
difficulties as those posed by the selection of acceptable probabilities of failure.

FINAL CONSIDERATIONS

The scope of engineering design is to put to the best use for society the resources, the means and the
information available. In the case of the design of the elements of a structural system, such information
has to include, besides the probabilistic distributions of the actions to be exerted on the system and the
probabilistic distributions of the resistances of its elements, estimates of all direct and indirect expenses
incurred in case the possible modes and degrees of damage are reached.

Structural systems whose elements all present the same expectation ratio, as here recommended, instead
of the same probability of failure, as is the current practice, will possibly fail more often; but, as their
failures will not bring such serious consequences, they will render a better service in the long run.

As expectation ratios are dimensionless quantities and their evaluation will be less affected by personal
biases in the evaluation of benefits and of costs involving social or human values, they will permit
selecting, in a more objective way than current practices, which elements of a structure and which
structures it will be convenient to reinforce, or to render lighter, in order to put to the best use the
resources, the means and all the information available on the actions to be exerted, on the behaviour of
the elements of structural systems, on the possible consequences of distinct modes and degrees of failure
and, what shall never be forgotten, on how much increases in safety will imply increases in weight and
in costs.

REFERENCES

Baecher, G., M. Elizabeth Pate and Richard de Neufville: RISK OF DAM FAILURE IN BENEFIT-COST
ANALYSIS, Water Resources Research, Vol. 16, No. 3, June 1980.

Burke, F.B.: DECISIONS ABOUT PUBLIC DANGERS. A MODEL STRUCTURE FOR A VALUATION
PROCESS, Technological Risks, N.C. Lind, ed., University of Waterloo, Ontario, Canada, 1982.

Ferry Borges, J. and M. Castanheta: STRUCTURAL SAFETY, National Laboratory of Civil Engineering,
Lisbon, Portugal, 3rd Ed., 1985.

Guedes-Soares, C. and P.C. Viana: SENSITIVITY OF THE RESPONSE OF MARINE STRUCTURES TO
WAVE CLIMATOLOGY, Computer Modeling in Ocean Engineering, Schrefler & Zienkiewicz (eds.),
1988, Balkema, Rotterdam.

Lind, N.C. (ed.): TECHNOLOGICAL RISKS, Proceedings of a Symposium held in 1981 at the University
of Waterloo, Ontario, Canada, 1982.

Sorensen, T. and O.J. Jensen: EXPERIENCE GAINED FROM BREAKWATER FAILURES, Breakwaters
85, Thomas Telford, London, 1985.

Svenson, D. and B. Fischhof: LEVELS OF ENVIRONMENTAL DECISIONS, Journal of Environmental
Psychology, 5, 1985.

Vasco Costa, F.: RELIABILITY OF PARTLY DAMAGED STRUCTURES, First IFIP WG 7.5 Working
Conference, Springer-Verlag, Berlin, 1987.

Vasco Costa, F.: PERCEPTION OF RISKS AND REACTION TO ACCIDENTS, Second IFIP WG 7.5
Working Conference, Springer-Verlag, Berlin, 1987.


APPLICATION OF PROBABILISTIC STRUCTURAL MODELLING TO
ELASTOPLASTIC AND TRANSIENT ANALYSIS

T. A. Cruse*, H. R. Millwater**, S. V. Harren** & J. B. Dias***


*Vanderbilt University
Nashville, TN USA 37235
**Southwest Research Institute
San Antonio, TX USA 78228
***Stanford University
Stanford, CA USA 94305

Introduction

The application of advanced probabilistic finite element methods to the elastoplastic response of a
pressurized cylinder and the transient response of a beam under a moving load is reported. The probabilistic
structural analysis methodology is based on a unique integration of a specially developed finite element
code with a probability integration method capable of providing accurate predictions of the
cumulative probability distribution for a wide range of structural response variables, in linear, nonlinear,
static, and dynamic problems.

The integrated analysis code system is NESSUS 1 (Numerical Evaluation of Stochastic Structures
Under Stress). NESSUS provides an integrated analysis system for automated calculations of the
probabilistic output in terms of user-defined random variables and random fields. The structural analysis
code uses perturbation algorithms which are used to define the sensitivity of the response variables to
changes in the random variables. A structural reliability code, referred to herein as the fast probability
integration (FPI) algorithm, combines this sensitivity data with the statistical models for each random
variable to predict the random response.

The paper briefly highlights the elements of the NESSUS system and the supporting algorithms.
Application of NESSUS to two significant, time-dependent problems is reported in order to validate the
capability of the NESSUS code.

NESSUS/FPI & FEM Code Overview

The major tool used in structural reliability analysis of civil structures has been the stochastic finite

1 NESSUS™ is a registered trademark.



element method [1,2]. Stochastic FEM solutions are in the general class of first-order, second moment
formulations. First-order formulations make use of the first-order Taylor series expansion of the stiffness
matrix in terms of each of the independent random variables {x}, as shown in Eq. (1)

[K] = [K]₀ + Σᵢ (∂[K]/∂xᵢ)₀ (xᵢ − E{xᵢ})    (1)

where the subscript-O denotes the deterministic value and E{} is the expectation operator. The coupled set
of equations giving the variance in the displacement solution is given by

Var{u} = Σᵢ (∂{u}/∂xᵢ)₀² Var{xᵢ}    (2)

for the case of no cross-correlation between the independent random variables, {x}. The resulting systems
of equations are then solved for the expected displacements and the second moment (variance) of the
displacements. The approximation of first order dependencies for the random variables is only valid if the
variance in each random variable is small (e.g., a few percent). It should also be noted that the stochastic
FEM solution does not make use of any information on the type of distributions for the random variables
and the solution variables. Thus the method is suitable only for approximate random variable assessments,
valid near the mean value of the solution.
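The first-order, second-moment idea can be sketched in a few lines of code. The sketch below estimates the mean and standard deviation of a scalar response from finite-difference sensitivities about the mean, assuming uncorrelated random variables; the function `fosm` and the single-spring example are illustrative, not part of any stochastic FEM code discussed here.

```python
import math

def fosm(u, x_mean, x_std, h=1e-6):
    """First-order second-moment estimate of a scalar response u(x).

    Mean: u evaluated at the mean point.
    Variance: sum of squared sensitivities times the input variances
    (inputs assumed uncorrelated, mirroring the no-cross-correlation case)."""
    u0 = u(x_mean)
    var = 0.0
    for i, (m, s) in enumerate(zip(x_mean, x_std)):
        xp = list(x_mean)
        xp[i] = m + h * max(abs(m), 1.0)
        du_dxi = (u(xp) - u0) / (xp[i] - m)   # finite-difference sensitivity
        var += (du_dxi * s) ** 2
    return u0, math.sqrt(var)

# Single-DOF example: displacement u = P / k with random load P and stiffness k
mean, std = fosm(lambda x: x[0] / x[1], [100.0, 1000.0], [5.0, 50.0])
```

Note that the output is only a mean and a variance: consistent with the limitation stated above, no distribution shapes enter or leave this calculation.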

Relatively recent work in reliability analysis has focused on the development of rapid estimation
algorithms for integrating the volume under the probability density function for a multivariable problem
[3, 4, and 5]. Figure 1 illustrates a joint probability density function for two variables, x₁ and x₂. The
two limiting curves, Z(x)=Constant, represent performance function levels, such as displacement or stress
magnitudes, natural frequencies, etc., given by a response model of the system and a specified limit
condition on the response.

The probability that the response exceeds the specified limit condition is computed by integrating
the volume under the joint-PDF surface beyond the appropriate Z(x)=Constant limit state. Monte Carlo
simulation estimates this volume by sampling a number of solutions to determine how many are beyond
the limit curve, relative to the number inside the limit curve. The volume under this surface may be
estimated first by replacing the actual variables by "reduced" or standard normal variables

uᵢ = (xᵢ − μᵢ) / σᵢ    (3)
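The Monte Carlo estimate described above can be sketched directly in terms of the reduced variables: sample each uᵢ as a standard normal, map back to the physical variables through xᵢ = μᵢ + uᵢσᵢ, and count the fraction of samples beyond the limit curve. This is a generic illustration assuming independent normal variables; the function name and example are hypothetical, not part of any code discussed here.

```python
import random

def mc_exceedance(z, limit, mu, sigma, n=20000, seed=1):
    """Monte Carlo estimate of P(Z(x) > limit): sample the physical
    variables through the reduced-variable map x_i = mu_i + u_i * sigma_i
    and count the samples beyond the limit curve."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = [m + rng.gauss(0.0, 1.0) * s for m, s in zip(mu, sigma)]
        if z(x) > limit:
            hits += 1
    return hits / n

# Example: Z = x1 + x2 with x1, x2 ~ N(0, 1); P(Z > 0) should be about 0.5
p = mc_exceedance(lambda x: x[0] + x[1], 0.0, [0.0, 0.0], [1.0, 1.0])
```

For limit states far in the tail, the hit count becomes very small and the sample size must grow accordingly, which is what motivates the fast probability integration methods discussed next.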

Figure 1: Joint PDF for Two Random Variables

The reduced variable formulation assures that the operations to estimate the probability level or reliability
are invariant with respect to the actual form of Z(x)=Constant used [6]. When the physical variables are
rewritten in this form, the joint PDF for the problem in Figure 1 may be seen in projection as a set of
concentric circles of constant σ (standard deviations), as shown in Figure 2.

The response surface g(u)=O in Figure 2 is generally a nonlinear function of the two random
variables and results from mapping Z(x)=Constant into the new variable space. The most probable point
(MPP) is given by the point on g(u)=O which is closest to the origin. This is usually determined by fitting
a local tangent to g(u) and moving this tangent until the MPP is estimated [3, 4]. If the joint probability
density functions for each random variable are normal distributions, the probability of exceeding the g(u)=O
limit state is estimated by fitting g(u) by a hyperplane (First Order Reliability Method; FORM), or by a
hyper-quadratic surface (Second Order Reliability Method; SORM) and computing the distance β from the
origin to the MPP. β is the reliability index for the limit state. For the case of normal distributions and a
linear limit state g(u)=0, β is directly related to the probability of exceeding the limit state

P(g(u) > 0) = Φ(−β)    (4)

where Φ(−β) is the cumulative distribution function of the standard normal distribution, evaluated at −β.
When the distributions are not normal, Reference [3] uses a normal mapping for the
variables that results in "equivalent" normal distributions for each variable. Calculation of β for the
mapped "equivalent" normal distributions then provides an estimate of the probability of exceeding the limit
state through (4).
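For a limit state that is already linear in the standard normal variables, relation (4) can be evaluated directly: β is the distance from the origin to the hyperplane and Φ(−β) is the exceedance probability. A minimal sketch with hypothetical helper names:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def form_linear(a, b):
    """For a linear limit state g(u) = sum(a_i * u_i) - b in standard
    normal space, with exceedance defined as g(u) > 0, the reliability
    index is the distance from the origin to the hyperplane g(u) = 0,
    and Eq. (4) gives the exceedance probability."""
    beta = b / math.sqrt(sum(ai * ai for ai in a))
    return beta, norm_cdf(-beta)

beta, p = form_linear([3.0, 4.0], 10.0)   # beta = 10/5 = 2
```

Since a·u is normal with standard deviation |a|, P(a·u > b) = Φ(−b/|a|), which is exactly Φ(−β); for nonlinear limit states FORM applies this formula to a hyperplane fitted at the MPP.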

In the algorithm of Wu [5], the response surface Z(x) is approximated as linear or pure
quadratic (no mixed terms). The non-normal random variables may be mapped into new "equivalent"
normal distributions using the same normal mapping as [4], but modified by a third parameter taken from
Figure 2: Joint PDF for Two Reduced Variables

the following optimal fitting

δ [ ∫ {Aᵢ Φ(uᵢ) − F(uᵢ)}² φ(uᵢ) duᵢ ] = 0    (5)

In (5), φ(u) is the Gaussian probability density function, serving as a weighting function, and
A is a constant for each of the "equivalent" normal distributions. Each A, and the values of μ, σ for the
equivalent cumulative distribution Φ(u), are computed from (5) in order to minimize the error in the
variable mapping in the least squares sense. The resulting approximate g(u)=0 is then fitted with a
hyperplane (FORM) to determine the MPP, as above. The value of β is then computed from the MPP
distance, and the probability of exceeding g(u)=0 is taken from (4).

In the NESSUS approach to probabilistic FEM analysis, the response function Z(x) is
approximated by a set of finite element solutions near the current MPP. The NESSUS code was developed
in order to compute perturbed solutions near a deterministic state in an efficient manner [7]. The
resulting set of solutions to Z(x) can then be used to fit a hyperplane or hyperquadratic surface to Z.
After the fitting to Z(x), the probability of exceeding a limit state g(Z)=0 is computed through the
equivalent normal variable mapping.

The NESSUS code utilizes two solution algorithms for linear problems: a standard displacement
method; and a mixed approach based on nodal variable equilibrium. The mixed approach solves the
simultaneous field problems of equilibrium in terms of nodal data. To do this, the strains and stresses are
projected to the nodes from the integration points. The mixed formulation used is based on an application
of the Hu-Washizu variational principle. The variational formulation is satisfied in an iterative manner as
described in [8] and has the following three-field form
[  D    -C    0  ] { ε }   { 0 }
[ -Cᵀ    0    B  ] { σ } = { 0 }    (6)
[  0     Bᵀ   0  ] { u }   { P }

The set of relations in (6) might be solved directly, except for the substantial matrix size issues.
NESSUS uses an iteration strategy to obtain nodal equilibrium from (6). The deterministic stiffness matrix
is factorized to obtain a displacement update in the iteration sequence, given a set of nodal stresses

u⁽ⁿ⁺¹⁾ = u⁽ⁿ⁾ + K₀⁻¹ [ P − Bᵀ σ⁽ⁿ⁾ ]    (7)

The strain projection to the node for the current displacement state is then given by

Cᵀ ε⁽ⁿ⁺¹⁾ = B u⁽ⁿ⁺¹⁾    (8)

while the nodal stress recovery is then given by

C σ⁽ⁿ⁺¹⁾ = D ε⁽ⁿ⁺¹⁾    (9)

The system of equations is iteratively solved and updated until suitable convergence is achieved. At this
point, the nodal values of displacement and stress are obtained which "best" represent equilibrium for the
applied loading or displacement conditions.
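The iterate-to-equilibrium idea can be illustrated on a toy two-element bar: given the current nodal stresses, the factorized stiffness supplies a displacement update, after which strains and stresses are recovered element by element, and the loop repeats until the internal forces balance the applied loads. This is a deliberately simplified sketch, not the NESSUS mixed formulation; for a linear problem it converges in one pass.

```python
def solve2(K, r):
    """Solve a 2x2 system K d = r by Cramer's rule (stand-in for a
    factorized stiffness solve)."""
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    return [(r[0] * K[1][1] - K[0][1] * r[1]) / det,
            (K[0][0] * r[1] - r[0] * K[1][0]) / det]

# Two-element bar (stiffness k per unit-length element), fixed at the
# left end, axial load applied at the free end.
k = 1000.0
K = [[2 * k, -k], [-k, k]]        # stiffness for the free DOFs u1, u2
P = [0.0, 50.0]                   # nodal load vector
u = [0.0, 0.0]                    # displacements of the free nodes
stress = [0.0, 0.0]               # element axial forces ("nodal stresses")

for _ in range(5):
    # internal nodal forces assembled from the current element forces
    f_int = [stress[0] - stress[1], stress[1]]
    resid = [P[0] - f_int[0], P[1] - f_int[1]]
    du = solve2(K, resid)          # displacement update from factorized K
    u = [u[0] + du[0], u[1] + du[1]]
    # strain recovery and stress recovery, element by element
    strain = [u[0], u[1] - u[0]]   # elongations of the unit-length elements
    stress = [k * strain[0], k * strain[1]]
```

At convergence both element forces equal the applied load (50), and the residual vanishes, which is the "best equilibrium" state the text refers to.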

Use of a nodal approach to probabilistic modeling is favored over the standard displacement
approach. In the standard approach there is an ambiguity as to which values of nodal stress the
probability level is to be assigned to. In the NESSUS approach there is no ambiguity, and the
values of nodal stress are defined consistent with the overall equilibrium requirements imposed
by the mixed formulation.

In the case of probabilistic modeling, we wish to obtain the solution of the equilibrium problem for
conditions that are perturbed with respect to each random variable. That is, the random problem is
represented as the following set of relations, where the hats denote random variables

(B̂, D̂, Ĉ) = (B, D, C) + Δ(B, D, C)

û = u + Δu    (10)

P̂ = P + ΔP

The perturbations in (10) are obtained first for the displacements by taking the effect of each random

variable change into account in a pseudo-force vector

(11)

where the strain and stress terms for the (n+1) iterate are obtained using the same algorithm as for the
nodal equilibrium updating.

So long as the perturbations are sufficiently close to the original state, the convergence is generally
quite satisfactory. The NESSUS code allows the user to take perturbations on one or both sides of the
base state, and to take as many perturbations of each variable as desired. A hyperplane or a
hyperquadratic (no mixed terms) surface is then fitted to the resulting perturbation data to define the
approximate response function, Z(x).

The use of a perturbation algorithm has been found to save significant time over a full
refactorization of the stiffness matrix. In the case of eigenvalue analysis, the perturbed solutions are
obtained from the base solution through sub-space iteration. Transient linear and nonlinear problems use
the same "equilibrium shift" as the above iteration scheme to update all of the transient matrices at each
load step.

The general NESSUS algorithm is to combine the Wu algorithm for estimating the probability of
exceedance from an approximate Z(x) function [5] with the perturbation algorithm, which develops a
database for obtaining Z(x) approximations about a specified design point.

The usual NESSUS solution begins by taking perturbations of the random variables about the
initial, mean value state (i.e., the deterministic solution). The response surface is fitted to the perturbation
data and the deterministic solution. Generally, a first order (hyperplane) fit is sufficient. The FPI
algorithm uses the approximation of Z(x) to compute the probability of exceedance for various levels of the
response function.

Figure 3 illustrates the results for a beam vibration example, denoting the first-order solution as the
MVFO (mean value, first order) solution for the probability of exceeding various natural frequencies. A
normally distributed solution would fall on a straight line in this figure which uses the Gaussian
distribution scale for the vertical axis. The cross-data points are computed natural frequencies based on a
set of default probability levels.

Each discrete solution point has a defined MPP condition from the FORM algorithm described
above. The MPP consists of the set of random design variable values that correspond to the calculated
probability level; these are the values of the random variables most likely to occur at that probability
level.

The MVFO solution is not sufficiently accurate away from the deterministic solution, as can be
seen by comparison of the predicted distribution with the "exact" solution for this problem, obtained by an
analytical solution of the beam vibration problem with log-normal random variables. However, assuming
that the probability level remains constant, we can update the MVFO solution by substituting the MPP
Figure 3: Illustrative Application of the NESSUS Algorithm (MVFO, AMVFO and exact solutions; cumulative probability, on a Gaussian scale, versus natural frequency in rad/sec)

conditions into the system equations and performing a new solution of the resulting equations. The updating
procedure is referred to as the advanced-MVFO (AMVFO) solution [9]. It is seen in Figure 3 that the
updated solution is quite close to the exact solution.

Iteration at the current solution (design) point may be performed by requiring a new set of
perturbations at the MPP condition. The FPI algorithm can then be applied to the new approximated
response function to update the probability level. This process can be repeated as needed for convergence.
In practice, the result is generally sufficiently accurate that iterations are not needed. NESSUS permits the
user to let the code test for the need to perform iterations by computing local perturbations at each
discrete solution point.
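The MVFO estimate and its AMVFO update can be sketched for a scalar response: fit a hyperplane at the mean, locate the MPP on that hyperplane for a chosen β, and then re-evaluate the exact response at the MPP. The function and the frequency-like example below are hypothetical illustrations of the procedure, assuming independent normal inputs and a lower-tail probability level Φ(−β).

```python
import math

def amvfo_point(z, mu, sigma, beta):
    """Mean-value first-order (MVFO) estimate of the response value at
    probability level Phi(-beta), followed by the AMVFO update:
    re-evaluate the exact response at the MPP of the MVFO solution."""
    h = 1e-6
    z0 = z(mu)
    # sensitivities of Z in reduced (standard normal) space
    s = []
    for i, (m, sd) in enumerate(zip(mu, sigma)):
        xp = list(mu)
        xp[i] = m + h
        s.append((z(xp) - z0) / h * sd)
    norm = math.sqrt(sum(si * si for si in s))
    z_mvfo = z0 - beta * norm                    # first-order lower-tail value
    u_star = [-beta * si / norm for si in s]     # MPP in reduced space
    x_star = [m + ui * sd for m, ui, sd in zip(mu, u_star, sigma)]
    z_amvfo = z(x_star)                          # exact response at the MPP
    return z_mvfo, z_amvfo

# Nonlinear frequency-like response sqrt(k/m) with random k and m
zf = lambda x: math.sqrt(x[0] / x[1])
mvfo, amvfo = amvfo_point(zf, [1000.0, 1.0], [100.0, 0.05], beta=2.0)
```

For a mildly nonlinear response the two values are close; the gap between them is exactly the correction the AMVFO update supplies away from the mean.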

The elastoplastic capability of NESSUS is based on the standard von Mises formulation with
isotropic, kinematic, and combined hardening rules. The elastoplastic curve is modeled as a piecewise
linear interpolation of effective stress versus effective plastic strain. Perturbation of the nonlinear solution
is obtained by iteration, as in the linear case, with the current histories for each random variable updated
based on the deterministic, incremental solution.

The transient load history formulation is a Newmark-β algorithm modified for the mixed
variational formulation. Dynamic equilibrium at the end of each time step is used to estimate a new
displacement correction term in the iteration algorithm for the mixed method. Nodal displacements,
strains, and stresses are updated in the same manner as for the static problem.
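A plain (unmodified) Newmark-β step for a single degree of freedom illustrates the underlying time integrator; β = 1/4, γ = 1/2 is the unconditionally stable average-acceleration rule. This sketch is the textbook scalar algorithm, not the mixed-formulation variant described above, and the names and numbers are illustrative.

```python
def newmark_sdof(m, c, k, load, dt, nsteps, beta=0.25, gamma=0.5):
    """Newmark-beta time integration for m*a + c*v + k*u = p(t).
    beta = 1/4, gamma = 1/2 is the average-acceleration rule."""
    u, v = 0.0, 0.0
    a = (load(0.0) - c * v - k * u) / m          # consistent initial acceleration
    keff = m / (beta * dt**2) + gamma * c / (beta * dt) + k
    hist = []
    for n in range(1, nsteps + 1):
        p = load(n * dt)
        # effective load from the previous state
        rhs = (p
               + m * (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
               + c * (gamma * u / (beta * dt) + (gamma / beta - 1.0) * v
                      + dt * (0.5 * gamma / beta - 1.0) * a))
        u_new = rhs / keff
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
        v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        hist.append(u)
    return hist

# Step load on an undamped oscillator: response oscillates about p0/k = 0.01
hist = newmark_sdof(m=1.0, c=0.0, k=1000.0, load=lambda t: 10.0, dt=0.005, nsteps=400)
```

The exact step response is u(t) = (p0/k)(1 − cos ωt), so the computed history should swing between about 0 and 2·p0/k; the average-acceleration rule preserves this amplitude and introduces only a slight period elongation.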

Application to Elastoplastic Analysis

The elastoplasticity validation problem is shown in Figure 4. The mesh consists of twenty equally
spaced axisymmetric ring elements loaded by an internal pressure applied incrementally. The internal
pressure magnitude and the yield stress are taken to be random variables. Zero strain hardening is
assumed.

Figure 4: Thick-Walled Cylinder Model (E = 30×10⁶ psi, ν = 0.3, σy = 45×10³ psi; internal pressure P; outer radius 20 in, inner radius 10 in; plastic zone shown)

The performance variable is taken to be the radial stress at 1.25 units out from the inner radius.
Figure 5 plots the CDF for the radial stress at this radius in terms of standard normal variable levels.
The MVFO solution is based on the sensitivity factors from NESSUS derived when the body is still
elastic. It is seen that the CDF is accurately predicted only for the elastic regime (+σ). The exact solution
is obtained by Monte Carlo simulations of the known plasticity solution for this problem.

Figure 5: Radial Stress CDF Distributions Compared to Monte Carlo Results (mean value first-order, advanced mean value first-order, and first-iteration results, with Phases I-III marked; radial stress (psi) on the horizontal axis)


The AMVFO solution results correspond to taking the MPP definition from the original MVFO
solution, and performing new incremental solutions for these design points. A solution is obtained at each
specified, discrete probability level shown. Again, the solution shows significant error with respect to the
Monte Carlo results.

The solution was perturbed at the MPP (design point) for two probability levels shown. Using the
MPP conditions from these perturbation solutions, two new incremental solutions have been obtained,
which now fall quite close to the Monte Carlo results.

The reason for the significant error in the AMVFO solution is seen in Figure 6, which plots the
probabilistic sensitivity factors, taken to be the direction cosines of β in Figure 2. In the Phase I region,
the results are totally elastic and the sensitivity factors reflect little influence of the yield stress, and nearly
total (linear) dependence on the applied pressure. This is correct only for the +σ regime of the results.

For points in the regions denoted Phase II and Phase III, the results are increasingly dependent on
the effect of the yield stress and its variance. Figure 6 shows that the correct sensitivity factors for these
regions show a transition to dominance by the yield stress. The sensitivity results predicted by NESSUS at
the two iteration conditions show good agreement with the computed factors.
Figure 6: Probabilistic Sensitivity Factors for Three Response Phases (MVFO and analytical FPI factors for the pressure P and the yield stress σy, together with NESSUS first-iteration results, plotted against probability level u)

Two conclusions are drawn from this set of results. First, if the perturbations do not involve the
physics of part of the distribution (i.e., plasticity), then the answer will have significant error. Second, the
AMVFO solution algorithm, which provides an automatic updating and accuracy checking capability, does
converge to the correct result.

Beam With a Moving Load

The second incremental solution validation problem is for a simply supported beam subject to a
moving point load. The stiffness and density of the beam, and the constant velocity of the load are taken
to be the random variables.

The analytical solution is taken to be the Euler-Bernoulli solution, appropriate for shallow beams.
While the NESSUS code uses a Timoshenko formulation, the shear effect is minimized by modeling a
shallow beam. The NESSUS model uses double nodes to capture the shear load shift as the point load
passes the node; the Newmark integration parameters were β = 0.25 and γ = 0.50. Two hundred time
steps were used, and the deterministic solution was stable and in excellent agreement with the analytical
solution, as shown in Figure 7.
Figure 7: Comparison of NESSUS Beam Solution for Travelling Load with Exact Solution (Rotation Angle at Left-Hand Beam End; analytical solution versus the displacement and mixed methods at t = 0.032 s)

The CDF results for the beam left-hand end angle at a fixed time point are shown in Figure 8,
again using standard normal variable plotting. Monte Carlo simulation of the exact beam equation is
shown, along with the NESSUS/FEM MVFO CDF solution. The AMVFO solution for the −2.7σ point is
also shown, and is in excellent agreement with the exact solution. The deterministic Monte Carlo solution
has been shifted by 0.0064 to agree with the NESSUS results at the mean value state.

Figure 8: CDF Results for Beam End Rotation (Monte Carlo with 10,000 samples, the shifted Monte Carlo solution (×0.9936), and the FEM MVFO and AMVFO solutions)

Conclusions

The application of NESSUS to the probabilistic analysis of structural components demonstrates that
efficient and accurate predictions of the uncertainties in structural responses can be obtained for transient
problems. The AMVFO algorithm is quite capable of accurately predicting the full distribution so long as
the physical conditions in the CDF are contained in the perturbation. For cases when that is not true, there
is strong evidence that the iteration algorithm, which performs perturbations in the tail of the distribution,
is quite capable of correcting the error in the AMVFO solution.

Acknowledgements

The authors wish to acknowledge the contributions of their coworker, Dr. Justin Wu of Southwest
Research Institute. We also gratefully acknowledge the substantial support of Dr. C. C. Chamis, NASA
Project Engineer for the PSAM effort. This research is sponsored by NASA (LeRC) under contract NAS3-
24389.

References

1. A. Der Kiureghian and J.-B. Ke, "The Stochastic Finite Element Method in Structural Reliability,"
Probabilistic Engineering Mechanics, 3, 83-91 (1988).

2. W. K. Liu, A. Mani, and T. Belytschko, "Finite Element Methods in Probabilistic Mechanics,"


Probabilistic Engineering Mechanics, 2, 201-213 (1987).

3. R. Rackwitz and B. Fiessler, "Structural Reliability Under Combined Random Load Sequences,"
Computers & Structures, 9, 489-494 (1978).

4. X. Chen and N. C. Lind, "A New Algorithm for Structural Reliability Estimation," Structural Safety,
1, 169-176 (1983).

5. Y.-T. Wu, "Demonstration of a New, Fast Probability Integration Method for Reliability Analysis,"
Journal of Engineering for Industry, ASME Transactions, 109, 24-28 (1987).

6. A. M. Hasofer and N. C. Lind, "Exact and Invariant Second-Moment Code Format," Journal of the
Engineering Mechanics Division, ASCE, 100, 111-121 (1974).

7. J. B. Dias, J. C. Nagtegaal, and S. Nakazawa, "Iterative Perturbation Algorithms in Probabilistic Finite
Element Analysis," in Computational Mechanics of Probabilistic and Reliability Analysis, ed. by
W. K. Liu and T. Belytschko, Elmepress International, Lausanne, Switzerland (1989).

8. S. Nakazawa, I. C. Nagtegaal, and O. C. Zienkiewicz, "Iterative Methods for Mixed Finite Element
Formulations," Hybrid and Mixed Finite Element Methods, AMD 74, ed. R. L. Spilker and K. W.
Reed, ASME, New York, NY (1985).

9. Y.-T. Wu, O. H. Burnside, and T. A. Cruse, "Probabilistic Methods for Structural Response Analysis,"
in Computational Mechanics of Probabilistic and Reliability Analysis, ed. W. K. Liu and T.
Belytschko, Elmepress International, Lausanne, Switzerland (1989).
RELIABILITY-BASED SHAPE OPTIMIZATION
USING STOCHASTIC FINITE ELEMENT METHODS

Ib Enevoldsen, J. D. Sørensen & G. Sigurdsson


Dept. of Building Technology and Structural Engineering
University of Aalborg, Sohngaardsholmsvej 57, DK-9000 Aalborg, Denmark

1. Introduction

Application of first-order reliability methods (FORM, see Madsen, Krenk & Lind [8]) in structural
design problems has attracted growing interest in recent years, see e.g. Frangopol [4], Murotsu,
Kishi, Okada, Yonezawa & Taguchi [9] and Sørensen [14]. In probabilistically based optimal design
of structural systems some of the quantities used in the modelling are treated as stochastic variables.
The stochastic variables are usually related to the strength, the loads or the mathematical
model. However, a more realistic modelling of some of the uncertain quantities is generally obtained
by using stochastic fields (e.g. loads and material parameters such as Young's modulus and
the Poisson ratio). In this case stochastic finite element techniques combined with FORM analysis
can be used to obtain measures of the reliability of the structural systems, see Der Kiureghian &
Ke [6] and Liu & Der Kiureghian [7].

In this paper a reliability-based shape optimization problem is formulated with the total expected
cost as objective function and requirements for the reliability measures (element or systems
reliability measures) as constraints, see section 2. Sizing variables (diameters, thicknesses, etc.)
and shape variables (geometrical variables) are used as design variables.

The shape optimization problem is formulated with requirements for element reliability indices in
the constraints. These element reliability measures are calculated with a stochastic finite element
program where the stochastic finite elements are modelled by stochastic variables through the
midpoint method (see section 3). Hence, the spatial random nature of material measures is taken
into account in the reliability calculations.

In section 4 methods to perform sensitivity analysis in an effective manner for both the reliability
analysis and the shape optimization are presented. A computer program is developed and finally,
in section 5, two simple examples are considered, namely 1) shape optimization of a corbel with a
stochastic vertical load and 2) optimization of the geometry of a hole in a plate.

2. Reliability-Based Shape Optimization

A number of structural shape optimization problems based on reliability measures can be formulated,
see e.g. Sørensen [13]. Here the shape optimization problem is formulated with element
reliability constraints.

min  W(z)                                              (1)

s.t.  β_i(z) ≥ β_i^min ,   i = 1, 2, ..., N            (2)

      z_j^min ≤ z_j ≤ z_j^max ,   j = 1, 2, ..., M     (3)

where z_1, z_2, ..., z_M are the optimization variables and β_1, β_2, ..., β_N are element reliability indices.
β_i^min, i = 1, 2, ..., N are the corresponding minimum acceptable element reliability indices; z_j^min
and z_j^max are simple lower and upper bounds of z_j. W(z) is the objective function, which can
e.g. be the weight or the total expected cost of the structural system. The optimization problem
(1)-(3) is generally non-linear and non-convex.

2.1 First Order Reliability Methods (FORM)

Random variables X = (X_1, X_2, ..., X_n) are used to model uncertain quantities connected with
the description of the parameters in the response calculations. Failure elements are used to model
potential failure modes of the structural system. Each failure element is described by a failure
function: g(x, z) = 0, where z is the vector of optimization variables. Realizations x of X where
g(x, z) ≤ 0 correspond to failure states, while realizations x where g(x, z) > 0 correspond to safe
states.

In FORM analysis, see e.g. Madsen, Krenk & Lind [8], a transformation T of the correlated
and non-normally distributed stochastic variables X into standardized and normally distributed
variables U = (U_1, U_2, ..., U_n) is defined (X = T(U)). In the u space the reliability index β is then
defined as

β = min √(uᵀu)  subject to  g(T(u), z) = 0    (4)

The reliability index β is thus determined by solving an optimization problem with one constraint.
This problem is generally non-convex and non-linear and can in principle be solved
using any general non-linear optimization algorithm, but the iteration algorithm developed by
Rackwitz and Fiessler (based on sequential quadratic programming), see e.g. Madsen, Krenk &
Lind [8], has been shown to be effective in FORM analysis.
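The Rackwitz-Fiessler (HL-RF) style iteration for problem (4) can be sketched in a few lines: linearize g at the current point and project onto the linearized limit state, repeating until the iterates settle on the point of g(u)=0 closest to the origin. A minimal illustration with finite-difference gradients; the function name and the linear example are hypothetical, not from the papers cited above.

```python
import math

def hlrf_beta(g, n, itmax=100):
    """HL-RF iteration for the reliability index: find the point on
    g(u) = 0 closest to the origin in standardized normal space (Eq. (4))
    for an n-dimensional u, using finite-difference gradients."""
    u = [0.1] * n
    h = 1e-7
    for _ in range(itmax):
        g0 = g(u)
        grad = []
        for i in range(n):
            up = list(u)
            up[i] += h
            grad.append((g(up) - g0) / h)
        gg = sum(gi * gi for gi in grad)
        # HL-RF update: project the current point onto the linearized limit state
        lam = (sum(gi * ui for gi, ui in zip(grad, u)) - g0) / gg
        u_new = [lam * gi for gi in grad]
        converged = max(abs(a - b) for a, b in zip(u_new, u)) < 1e-10
        u = u_new
        if converged:
            break
    return math.sqrt(sum(ui * ui for ui in u)), u

# Linear limit state g(u) = 6 - 2*u1 - u2: beta = 6/sqrt(5)
beta, u_star = hlrf_beta(lambda u: 6.0 - 2.0 * u[0] - u[1], 2)
```

For a linear limit state the update lands on the exact MPP in one step; for nonlinear g the same projection is repeated until convergence, which is the behaviour the sequential-quadratic-programming view of the algorithm explains.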

2.2 Optimization Procedures

The optimization problem (1)-(3) can be solved using any general non-linear optimization algorithm.
In this paper the NLPQL algorithm developed by Schittkowski [10] is used in the examples
shown in section 5. The NLPQL algorithm is based on the optimization method by Han, Powell
and Wilson, see Gill, Murray & Wright [5]. Generally, it is a very effective method in which each
iteration consists of two steps. The first step is determination of a search direction by solving a
quadratic optimization problem formed by a quadratic approximation of the Lagrange function of
the non-linear optimization problem and a linearization of the constraints at the current design
point. The second step is a line search with an augmented Lagrangian merit function.

NLPQL requires estimates of the gradients of the objective function and the constraints. If the
structural weight is used as objective function, the gradients of W(z) are easily determined numerically
or analytically. Gradients of the reliability constraints, for which numerical estimation is
generally time-consuming, can be determined semi-analytically, as described in section 4.

3. Stochastic Finite Element Formulation

Consider an uncertain property Y, related to the response calculation of a structure, modelled as
a spatial random field {Y(s), s ∈ Ω}, where Ω is the set of possible coordinates of the structure,
see Vanmarcke [15]. If FORM analysis is to be used to estimate the reliability it is necessary to
introduce a discretization by random variables. This can be performed by introducing stochastic
finite elements (similar to the usual finite elements in finite element analysis).

Several approaches have been suggested to represent random fields by random variables in stochastic
finite elements. The most general (and most numerically stable) method is the midpoint
method, see Der Kiureghian & Ke [6]. The randomness in a stochastic element is represented by
a stochastic variable X_i assigned to the centroid of the stochastic element:

Xi = Y(mi) (5)

where m_i are the coordinates of the centroid of stochastic finite element no. i.

The expected value, standard deviation and distribution of X_i are the same as those of Y(m_i).
The correlation between the variables X = {X_1, X_2, ..., X_n} is defined by:

ρ[X_i, X_j] = ρ_YY(m_i, m_j)    (6)

where ρ_YY is the autocorrelation coefficient function of the stochastic field.
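A sketch of the midpoint discretization: one random variable per stochastic element, placed at the element centroid, with the correlation matrix filled from the autocorrelation coefficient function. An exponential model ρ = exp(−d/L_y) is assumed here purely for illustration; the function name and the numbers are hypothetical.

```python
import math

def midpoint_discretization(centroids, mean, std, corr_length):
    """Midpoint-method discretization of a homogeneous random field:
    one variable per stochastic element, located at the element centroid,
    with an exponential autocorrelation rho = exp(-d / L_y) (an assumed
    correlation model for illustration only)."""
    n = len(centroids)
    means = [mean] * n
    stds = [std] * n
    rho = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d = math.dist(centroids[i], centroids[j])
            rho[i][j] = math.exp(-d / corr_length)
    return means, stds, rho

# Four stochastic elements along a strip, centroids 0.5 apart,
# correlation length L_y = 2.0 (element size L_y/4, per the rule in [6])
means, stds, rho = midpoint_discretization(
    [(0.25, 0.0), (0.75, 0.0), (1.25, 0.0), (1.75, 0.0)], 210e9, 21e9, 2.0)
```

The resulting mean vector, standard deviations and correlation matrix are exactly the inputs a FORM transformation X = T(U) consumes; the mesh-size rules of the next subsection keep this matrix well conditioned.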



3.1 Stochastic Finite Element Mesh Generation

In general the midpoint method implies that the variability of the field within the elements is
overestimated, see Der Kiureghian & Ke [6]. This can be avoided by selecting a finer stochastic
element mesh. The stochastic mesh is generally determined according to the following two stochastic
element mesh generating rules:

1) The element mesh should be so fine that the fluctuation of the random field (measured by
the correlation length, defined as the length L_y over which the autocorrelation coefficient function
decreases to a small value, e.g. e⁻¹) can be represented. For the midpoint method a stochastic
element size from L_y/4 to L_y/2 is recommended from experience in [6].

2) The elements must not be so small that highly correlated stochastic variables of adjacent elements
cause numerical instability in the transformation (X = T(U)), see section 2.

From the above rules it is suggested in [6] that a FEM-mesh is selected so that it satisfies the
requirements for an ordinary FEM-mesh and the above-mentioned requirements based on the
fluctuation of the random field. The elements in the stochastic finite element mesh are then
selected so that each element is a block of one or more FEM-elements.
Further information concerning stochastic finite element mesh generation and mesh generation in
shape optimization can be obtained from the examples in section 5.

4. Sensitivity Analysis

From section 2 it is seen that if first order optimization methods are used, an important part of a
reliability-based shape optimization is to calculate the gradients of reliability indices and failure
functions with effective and fast methods, see also Sørensen & Enevoldsen [11]. It is possible to
use ordinary numerical differentiation, but this is generally inaccurate and inefficient because
it requires a large number of stochastic finite element response calculations. It is faster and more
reliable to use semi-analytical gradient calculations, as shown in the following.

Consider a failure element assigned to a critical point A placed in a finite element and a corresponding
stochastic finite element. The failure function and response formulas are written (linear
elasticity is assumed)

g(u, z, σ[u, z, a(u, z)], a[u, z]) = 0    (7)

K[u, z] a[u, z] = P[u, z]    (8)

σ[u, z, a(u, z)] = C[u, z] B[u, z] a[u, z]    (9)

where u is a realization of the stochastic variables, g the failure function, z optimization variables,

a the node displacements, σ the stress vector at the point A, K the stiffness matrix, P the load vector,
C the material matrix and B the strain-displacement matrix, see Bathe [1].

dg_i/du_j (needed in the reliability index calculations for solution of the problem in (4)) for fixed z
is then calculated as

(10)

where (9) is used to determine

(11)

and ∂a/∂u_j is determined from (8) as

(12)

Notice that a and ∂a/∂u_j in (11) are the local displacements and the derivatives of the local
displacements, selected from the global displacements and from the derivatives of the global
displacements ∂a/∂u_j in (12). All the remaining partial derivatives in (10) - (12) can be determined
analytically or numerically.

dβ_i/dz_j (needed in the solution of the shape optimization problem in (1) - (3)) is calculated based on
a sensitivity analysis of the optimality conditions for the optimization problem in (4)

(13)

where

(14)

and

(15)

All the remaining derivatives in the above formulas are relatively easily calculated by numerical
differentiation. Hereby a considerable number of stiffness matrix assemblies and inversions can be
omitted compared to a simple numerical differentiation scheme, so a large reduction of computer
time is achieved.

5. Examples

In the following, two example problems are solved using a computer program developed for plane
problems, using the methods and theories presented in the previous sections. An impression of the
program is given in figure 1, where the flow-chart of the program is shown.

[Flow-chart: calculate the response and gradients; reliability subprogram using stochastic finite
element methods; iteration loops checking: is β optimal?; is m equal to the number of element
reliability constraints?; is z^(j+1) the optimal point?]

Figure 1. Flow-chart of a reliability-based shape optimization program using stochastic finite


element methods. (j is the iteration number in the shape optimization, i is the iteration number
in the reliability index calculation and m is the constraint number.)

Example 1: Shape Optimization of a Corbel

The corbel is shown in figure 2. The loads on the corbel consist of a distributed load P acting at
a 60 degree angle.

[Figure: load components P/2, P/4 and P/4 acting at a 60° angle; overall dimensions 260, 180 and
160 mm; thickness 10 mm.]

Figure 2. Geometrical model of the corbel with loads. (measurements in mm)

Shape Optimization Model

The shape optimization problem is formulated as described in section 2 with the weight of the
corbel (measured by the area for fixed thickness) as the objective function.

The geometry of the top side is kept constant. The remaining geometry is optimized using 5 design
variables z assigned to 5 moving directions in the 4 master nodes shown in figure 3 a. The geometry
between the governing master nodes is limited to circles or lines.

a) b)

Figure 3. a) Master nodes and optimization variables with moving directions. b) FEM-mesh
with assignment of constraints.

In the shape optimization problem (1) - (3), the constraints (2) ensure that the reliability is
satisfactory at the critical points (one constraint is connected with each critical point A_i,
i = 1, …, N). Assigning the critical points is a difficult task (particularly if the structure is
large or complicated) and may be carried out by preanalysis or simply by selection of many critical
points. The critical points have to be chosen in such a way that the points that are critical during
the whole optimization process are included.

In this small model only 5 critical points and corresponding constraints are chosen, see figure 3 b,
where also the FEM mesh of 4-node plate elements is shown.

The minimum acceptable element reliability index is chosen as β_i^min = 4.0.

Reliability Analysis

The reliability indices β_i in the constraints are calculated with a failure function modelling yielding
failure by the von Mises yield criterion in a plane stress problem. The failure function is written:

(16)

where Z is a model uncertainty variable, S_y the yield stress, and σ_x, σ_y and τ_xy the stress
components in σ.
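The plane-stress von Mises criterion behind (16) can be sketched as follows; the exact arrangement of the terms in the paper's failure function is not reproduced, and the sign convention (g ≤ 0 meaning yielding) is an assumption for the example:

```python
import math

def g_von_mises(z, s_y, sx, sy, txy):
    # Plane-stress von Mises equivalent stress compared against the
    # model-uncertainty-scaled yield stress; g <= 0 indicates yielding.
    sigma_vm = math.sqrt(sx**2 + sy**2 - sx * sy + 3.0 * txy**2)
    return z * s_y - sigma_vm

# Illustrative stress states using mean values Z = 1, S_y = 287 N/mm^2
assert g_von_mises(1.0, 287.0, 100.0, 50.0, 30.0) > 0    # safe state
assert g_von_mises(1.0, 287.0, 300.0, -200.0, 80.0) < 0  # yielded state
```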
The Poisson ratio ν and the E-modulus are modelled as stochastic fields. Both processes are
assumed to be homogeneous and log-normally distributed. The autocorrelation coefficient functions
are calculated using

(17)

where mi"" mi, are the coordinates to the centroid in stochastic element i and Ly the correlation
length of the process.

The correlation lengths are assumed to be L_E = 500 mm and L_ν = 250 mm. Further, it is assumed
that the correlation between the two fields at the same point is ρ_{νi,Ei} = 0.7 and at two different
points ρ_{νi,Ej} = √(ρ_{νi,Ei} ρ_{νj,Ej}) √(ρ_{Ei,Ej} ρ_{νi,νj}). All other variables are modelled
as uncorrelated.

From the above correlation lengths and the mesh rules in section 3.1 the stochastic finite element
mesh in figure 4 is obtained and used for both processes.
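The midpoint discretization above can be sketched directly: each stochastic element contributes one variable located at its centroid, and the correlation matrix is built from the autocorrelation coefficient function. The exponential decay and the centroid coordinates below are illustrative assumptions, not the exact expression in (17) or the actual mesh geometry:

```python
import numpy as np

# Midpoint-method discretization sketch of a homogeneous random field.
def correlation_matrix(centroids, L):
    c = np.asarray(centroids, dtype=float)
    d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)  # pairwise distances
    return np.exp(-d / L)   # assumed exponential autocorrelation coefficient

# 2 x 2 stochastic mesh as in figure 4 (centroid coordinates are assumed)
centroids = [(65.0, 65.0), (195.0, 65.0), (65.0, 195.0), (195.0, 195.0)]
R_E = correlation_matrix(centroids, L=500.0)   # E-modulus field, L_E = 500 mm
R_nu = correlation_matrix(centroids, L=250.0)  # Poisson field,  L_nu = 250 mm

assert np.allclose(np.diag(R_E), 1.0)
assert R_E[0, 1] > R_nu[0, 1]  # longer correlation length -> stronger correlation
```

The mesh rule of section 3.1 (element size between L_y/4 and L_y/2) keeps adjacent entries of these matrices away from 1, avoiding the numerical instability in the transformation X = T(U).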

Figure 4. Stochastic finite element mesh. (Thin lines FEM, thick lines SFEM)

From figure 4 it is seen that the ordinary finite element mesh is divided into 4 stochastic elements,
i.e. the discretization of the two stochastic fields by the midpoint method gives 8 stochastic
variables. In Table 1 the statistical characteristics of all 11 basic stochastic variables are shown.

Variable   Designation        Distribution   Expected value    Coeff. of var.
E_1-4      Young's modulus    LN             2.1·10⁵ N/mm²     0.05
ν_1-4      Poisson ratio      LN             0.30              0.1
Z          Model uncertainty  N              1.0               0.1
S_y        Yield stress       LN             287 N/mm²         0.1
P          Load               N              1.3·10⁵ N         0.1

Table 1. Statistical characteristics. (N: Normal, LN: Lognormal.)

Shape Optimization Results

The shape optimization problem is solved using the NLPQL algorithm, Schittkowski [10], and the
reliability calculations are performed using PRADSS, see Sørensen [12]. The initial and optimal
β-values of the constraints and the objective function value W are seen in table 2, and the optimal
geometrical shape is seen in figure 5. The computer time was approximately 1 CPU-hour on a
VAX 8700.

        Initial          Optimum
β₁      4.31             4.00
β₂      5.92             6.35
β₃      8.75             7.09
β₄      7.73             5.04
β₅      4.71             4.00
W       2.72·10⁴ mm²     2.54·10⁴ mm²

Table 2. Results of shape optimization.



Figure 5. Optimal geometry.

Design Evaluation

As mentioned above, the assignment of constraints to critical points is essential in shape optimization. In this analysis the assignment is evaluated by a reliability analysis of the final optimal
solution, with a failure element placed in all finite element nodes. This analysis showed that none
of the element reliability indices is below 4.0, i.e. the assignment of constraints in figure 3 b is
acceptable.

It is also important to perform a sensitivity analysis of the optimal design. A more detailed
sensitivity analysis of the design with respect to changes in the parameters of the structural
reliability model and other model parameters can be performed according to the methods outlined in
Sørensen & Enevoldsen [11]. Here, only the sensitivities of the reliability indices in the two active
constraints with respect to the basic stochastic variables are calculated.

The solution to (4) is called u* and is written u* = β α*. The elements of the α* vector are
measures of the relative importance of the stochastic variables (the α*-parameters of correlated
variables are combined in a Euclidean norm and considered as one parameter). The α*-parameters
are shown in table 3.

Variable        Constraint 1   Constraint 5
E_1-4, ν_1-4    0.21           0.19
Z               0.01           0.01
S_y             0.73           0.73
P               0.65           0.66

Table 3. α*-parameters as sensitivity measures.

From table 3 it is seen that the stochastic variables of the E-modulus and the Poisson ratio are of
some significance but not dominating in this example.

Finally, the β-values of the constraints are calculated using 3 × 3 stochastic elements instead of
2 × 2, see figure 4. Not surprisingly (due to the stochastic mesh generation rules and the low
sensitivity, indicated in table 3, of the variables in the stochastic elements), this reliability
analysis gives nearly the same results as shown in table 2.

Example 2: Shape Optimization of a Plate With a Central Hole

A quarter of a square plate with distributed edge loads and a central hole is considered, see figure 6.

[Figure: quarter plate with distributed edge loads and a central hole; dimensions 40, 60 and 20 mm;
thickness 10 mm.]
Figure 6. Geometrical model of a plate with a central hole (measurements in mm).

When the material properties are modelled as stochastic fields, the double symmetry of the plate
cannot exist. Nevertheless, symmetry is assumed because of computing-time considerations.

Shape Optimization Model

The optimization problem is formulated as in the previous example. The geometry of the hole is
optimized with two models: model 1), shown in figure 7 a, with 2 optimization variables in 2 master
nodes and a geometry limited to an ellipse, and model 2), shown in figure 7 b, with 7 optimization
variables in 7 master nodes and no limitations on the geometry.

Model 1) is formulated based on the analytical deterministic solution of the optimization problem
(weight minimization with stress constraints), which is an ellipse with the same ratio between the
principal axes (a/b) as between the edge loads, i.e. a/b = 1.5 at the optimum in this case, see
Braibant & Fleury [2].

In this problem it is easy to assign the constraints because the critical stress concentrations will
be at the edge of the hole. Hence, 7 element reliability constraints are assigned to the 7 hole edge
nodes in the models, with β_i^min = 4.0.

Figure 7. Optimization models: a) model 1 with 2 optimization variables and b) model 2 with 7
optimization variables.

Reliability Analysis

The reliability analysis is performed as described in example 1. The correlation lengths are assumed
to be L_E = 200 mm and L_ν = 100 mm. The load P is assumed to be normally distributed with expected
value 38·10³ N and coefficient of variation 0.1. The finite element mesh is divided into a stochastic
finite element mesh with 4 elements, used for both stochastic fields, see figure 8.

Figure 8. Stochastic finite element mesh. (Thin lines FEM, thick lines SFEM)

Shape Optimization Results

The results of the shape optimization are seen in table 4 and the optimal geometry of the hole in
figure 9.

        Initial          Opt. Mod. 1      Opt. Mod. 2
β₁      9.43             4.04             4.00
β₂      0.70             4.09             4.00
β₃      8.59             4.12             4.00
β₄      6.05             4.00             4.00
β₅      3.70             4.00             4.00
β₆      2.01             4.05             4.00
β₇      1.03             4.09             4.00
z₁      20.0 mm          20.3 mm          21.1 mm
z₂      20.0 mm          13.6 mm          14.1 mm
W       3.33·10⁴ mm²     3.39·10⁴ mm²     3.37·10⁴ mm²

Table 4. Results of shape optimization.

Figure 9. Optimal hole geometry. (Solid line: model 1; dotted line: model 2.)

Not surprisingly, it is seen that the largest hole is obtained using model 2 and that the optimal
hole in the reliability-based optimization using stochastic finite element methods is not exactly an
ellipse. The ratio between the axes a and b is, after all, nearly the same as in the analytical
deterministic solution.

6. Conclusions

Reliability-based shape optimization problems are formulated with requirements on element reliability measures. The structures are modelled by stochastic variables and stochastic fields. The
element reliability indices are determined based on a discretization of the stochastic fields using
the stochastic finite element method with the midpoint method.

Further it is shown how sensitivity analysis in both the stochastic finite element response calcula-
tions and in the reliability calculations can be performed in a numerically efficient manner.

A flow-chart of a reliability-based shape optimization program using stochastic finite elements is
shown. The program is used in two examples: 1) shape optimization of a corbel and 2) shape
optimization of a hole in a plate. Both examples show how the described techniques work.

7. References

[1] Bathe, K.-J.: Finite Element Procedures in Engineering Analysis. Prentice-Hall, 1982.
[2] Braibant, V. & C. Fleury: An Approximation-Concepts Approach to Shape Optimal Design. Computer Methods in Applied Mechanics and Engineering, Vol. 53, pp. 119-148, 1985.
[3] Enevoldsen, I., J. D. Sørensen & P. Thoft-Christensen: Shape Optimization of Mono-Tower Offshore Platform. Presented at OPTI 89 Conference on "Computer Aided Optimum Design of Structures", Southampton, 1989.
[4] Frangopol, D. M.: Sensitivity of Reliability-Based Optimum Design. ASCE, Journal of Structural Engineering, Vol. 111, No. 8, 1985.
[5] Gill, P. E., W. Murray & M. H. Wright: Practical Optimization. Academic Press, Inc., 1981.
[6] Der Kiureghian, A. & J.-B. Ke: The Stochastic Finite Element Method in Structural Reliability. Probabilistic Engineering Mechanics, Vol. 3, No. 2, 1988.
[7] Liu, P.-L. & A. Der Kiureghian: Finite Element Reliability Methods for Geometrically Nonlinear Stochastic Structures. Report No. UCB/SEMM-89/05, Dept. of Civ. Eng., Berkeley, California, 1989.
[8] Madsen, H. O., S. Krenk & N. C. Lind: Methods of Structural Safety. Prentice-Hall, 1986.
[9] Murotsu, Y., M. Kishi, H. Okada, M. Yonezawa & K. Taguchi: Probabilistically Optimum Design of Frame Structure. 11th IFIP Conf. on System Modelling and Optimization, Springer-Verlag, pp. 545-554, 1984.
[10] Schittkowski, K.: NLPQL: A FORTRAN Subroutine Solving Constrained Non-Linear Programming Problems. Annals of Operations Research, 1986.
[11] Sørensen, J. D. & I. Enevoldsen: Sensitivity Analysis in Reliability-Based Shape Optimization. Presented at the NATO Advanced Study Institute on "Optimization and Decision Support Systems in Civil Engineering", Edinburgh, 1989.
[12] Sørensen, J. D.: PRADSS: Program for Reliability Analysis and Design of Structural Systems. Structural Reliability Theory, Paper No. 36, The University of Aalborg, Denmark, 1987.
[13] Sørensen, J. D.: Reliability-Based Optimization of Structural Systems. 13th IFIP Conference on "System Modelling and Optimization", Tokyo, Japan, 1987.
[14] Sørensen, J. D.: Probabilistic Design of Offshore Structural Systems. Proc. 5th ASCE Spec. Conf., pp. 189-193, Virginia, 1988.
[15] Vanmarcke, E.: Random Fields: Analysis and Synthesis. MIT Press, 1983.
CALIBRATION OF SEISMIC RELIABILITY MODELS

Luis Esteva, Professor and Director


Institute of Engineering, National University of Mexico
Mexico D. F., 04510 Mexico

SUMMARY

The calibration of theoretical seismic reliability models with respect to observations about
response, damage and failure of structures subjected to earthquakes is presented. Reliability is
described in terms of the mean and variance of the maximum value reached during an earthquake by a
behavior index such that a value greater than unity means failure. Qualitative observations about
the behavior of structures are transformed into conditional probability distributions of the
behavior indices of different structures, and a Bayesian formulation is adopted for updating prior
distributions of the means and variances of those indices in the light of uncertain observations
about the values taken by them in real structures responding to earthquakes.

INTRODUCTION

Writers of seismic design codes often face the problem of establishing a com-
bination of design response spectra, nominal loads, resistances, load factors and
strength reduction factors, such that the structures that result from their use are
characterized by "adequate" safety levels, in the sense that either failure proba-
bilities are sufficiently small or they correspond to an optimum balance between
expected consequences and construction and maintenance costs. Significant research
efforts have been devoted to improving the criteria and tools for seismic hazard
assessment; however, very little attention has been paid to the estimation of the
reliability of complex nonlinear systems, designed in accordance with given rules,
subjected to earthquakes of given intensities. These studies (Esteva and Ruiz,
1989; Esteva et aI, 1989) have produced criteria and algorithms for computing
reliabilities, which open the door to systematic studies of the problem, but their
usefulness in practice is still limited, because of the sensitivity of their
resul ts to the force-deformation properties of the structural members and to the
conditions assumed to give place to system failure. In addition, the computational
effort required by those algorithms precludes their applicability to parametric
studies dealing with detailed models of complex systems.

In an attempt to cope with the foregoing problems, this paper presents


criteria for the calibration of reliabilities resulting either from two alternate

models with different degrees of refinement or from the predictions of one model combined with
observations on families of real structures. The final objective of the proposed approach is to
develop a set of criteria and methods suitable for use by engineers responsible for seismic safety
standards, providing them with the tools for a) computing reliabilities of structures belonging to
specified types or families, and b) transforming those reliabilities into values representative of
those inferred from the calibration of computed reliabilities with damage and failure rates observed
in actual structures subjected to earthquakes.

ON EQUIVALENT MODELS

Our problem is formulated as follows: given a complex structural system to be designed and built in
accordance with specified design, construction and quality control criteria and methods, define a
simplified model of the system, adequate for estimating probability distributions of the variables
that best describe its response and performance under earthquakes considered as stochastic
processes. The first step towards the solution consists in choosing the response and performance
variables of the complex (or detailed) model of the system to be estimated by means of the
simplified model. A similar decision must be made concerning the global properties of the system to
be represented and, then, about the transformation rules tying the properties of the elements of
both systems, including the corresponding uncertainties. Finally, transformation rules between the
(probabilistic) predictions of both models must be obtained. These rules may be applied to extend
to the detailed models the results of parametric studies carried out with the simplified ones.

Fig. 1. Structural models with different degrees of refinement: a) type A, b) types B1 and B2,
c) types C1 and C2, d) type D.

The wider the variety of the characteristics of detailed systems intended to be covered by the same
family of simplified models, the more complex will be the transformation rules of system properties,
and the wider will be the uncertainties tied to the predictive capability of the simplified system.
Therefore, in this paper we restrict the scope of our studies to the family of reinforced concrete
building frames with story stiffnesses varying (approximately) linearly along the building height.
In this stage we also ignore stiffness- and strength-degradation,


P-delta effects and soil-structure interaction. These systems may be represented by
means of models with different degrees of refinement, as shown by fig. 1, the
notation of which will be used throughout this paper. Model type A consists of a
large number of structural members; models type B concentrate beam and column
strengths and stiffnesses in a smaller number of members, arranged into simplified
frames; models type C are shear systems, while type D is a cantilever with a
flexibility capable of representing the deformability related to both story shears
and axial forces in columns, and with a lateral strength that accounts for the
interaction between yield moments and axial loads in the columns of the detailed
system.

Properties related to linear response of equivalent deterministic systems

We first consider the case when a type D equivalent model is adopted. If the
linear response of the original system is replaced with the contribution of its
fundamental mode of vibration, the dynamic response of the equivalent system has to
satisfy the following equation

ẍ + 2ζpẋ + p²x = -αẍ₀                                    (1)

where x is the relative displacement of the mass with respect to the ground, ζ is the fraction of
critical damping, p and α are respectively the frequency and the participation factor of the
fundamental mode, and ẍ₀ is the ground acceleration. p and α are related to K, M and Z (the
stiffness matrix, the mass matrix and the configuration of the fundamental mode) in accordance with
well known expressions.

When dealing with a model of type B1 (fig. 1b), the equivalence conditions are: a) the values of EI
and EA (E = Young's modulus, A = cross-section area, I = moment of inertia) of each column of the
simplified system are equal to half the sum of the corresponding values for all the columns of the
original system at the same story; b) the value of EI/L of each beam (L = span) is equal to the sum
of the values for the beams at the same level; and c) at each beam-column joint of the simplified
system a mass is located equal to half the total mass of the original system at the same level.
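Rules a) - c) can be sketched directly; the input numbers below are invented for illustration:

```python
# Condensation of a multi-bay frame into a two-column type-B1 equivalent
# frame.  Inputs: per-story lists of column (EI, EA) pairs, per-level lists
# of beam EI/L values, and per-level masses.

def condense_to_B1(column_EI_EA, beam_EI_over_L, level_mass):
    eq_columns = [(0.5 * sum(ei for ei, _ in story),
                   0.5 * sum(ea for _, ea in story))
                  for story in column_EI_EA]             # rule (a)
    eq_beams = [sum(level) for level in beam_EI_over_L]  # rule (b)
    eq_masses = [0.5 * m for m in level_mass]            # rule (c)
    return eq_columns, eq_beams, eq_masses

# One story with three identical columns, one level with two beams (assumed)
cols, beams, masses = condense_to_B1(
    column_EI_EA=[[(4.0e12, 2.0e9), (4.0e12, 2.0e9), (4.0e12, 2.0e9)]],
    beam_EI_over_L=[[1.5e9, 1.5e9]],
    level_mass=[6.0e4],
)
assert cols == [(6.0e12, 3.0e9)]
assert beams == [3.0e9]
assert masses == [3.0e4]
```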

In the cases of building frames where the most significant part of the response arises from the
fundamental vibration mode, and where the contribution of the axial deformations of columns may be
disregarded, it may be advantageous to introduce simplifications that permit one to define story
stiffnesses (ratios of shear to lateral deformation), which are independent of the along-height
distribution of lateral forces, instead of working with the full stiffness matrix (Rosenblueth and
Esteva, 1962). This simplifies the evaluation of the modal stiffness, k.

Properties related to linear response of equivalent uncertain systems

In order to obtain first and second moments of the distributions of p and α (eq. 1), it is necessary
to have information about the vectors of expected values and the covariance matrices of the elements
of M and K. For the case of M, those values are given at the outset, while the values corresponding
to K have to be derived from the probabilistic information relative to the stiffnesses of the
individual members.

First-order estimates of the mean and variance of the fundamental natural frequency (or natural
period) can be obtained by the standard theory (Benjamin and Cornell, 1970), starting from the
equation T = 2π√(m/k), where m = ZᵀMZ and k = ZᵀKZ. For this purpose, the means and variances of m
and k are required:

E(k) = Σ_i Σ_j Z_i Z_j E(K_ij)                                        (2a)

var(k) = Σ_i Σ_j Σ_m Σ_n Z_i Z_j Z_m Z_n cov(K_ij, K_mn)              (2b)

E(m) = Σ_i Z_i² E(M_i)                                                (2c)

var(m) = Σ_i Σ_j Z_i² Z_j² cov(M_i, M_j)                              (2d)

In these equations, Z_i, M_i and K_ij are elements of the matrices Z, M and K. In their derivation
it has been assumed that the uncertainty associated with the modal configuration can be neglected.
This assumption was adopted on the grounds of the well known stationarity property of the ratio
ZᵀKZ/ZᵀMZ with respect to Z near its correct value (Crandall, 1956). The values of Z_i to be used in
eqs. 2a-d may be those computed on the basis of the expected values of the elements of M and K.
Equations simpler than these may be derived if one works with story stiffnesses rather than with the
matrix K. If S is the vector of story stiffnesses S_i, and D the vector of modal story deformations
D_i, one obtains, instead of eqs. 2a, b:

E(k) = Σ_i E(S_i) D_i²                                                (3a)

var(k) = Σ_i Σ_j D_i² D_j² cov(S_i, S_j)                              (3b)

where E(S_i) and cov(S_i, S_j) are easily obtained by applying the Benjamin and Cornell
approximations to the expressions available for the calculation of story stiffnesses.
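Eqs. (3a) - (3b), followed by first-order moments of T = 2π√(m/k), can be sketched as follows; the story-stiffness statistics, modal deformations and modal mass are illustrative assumptions:

```python
import numpy as np

# First-order moments of the modal stiffness k from story-stiffness
# statistics and modal story deformations (3-story shear model, assumed data).
D = np.array([0.4, 0.35, 0.25])           # modal story deformations D_i
E_S = np.array([5.0e4, 4.0e4, 3.0e4])     # E(S_i), N/mm
cov_S = np.diag((0.10 * E_S) ** 2)        # 10% c.o.v., uncorrelated stories

E_k = float(np.sum(E_S * D**2))                        # (3a)
var_k = float((D**2) @ cov_S @ (D**2))                 # (3b)

# First-order moments of T = 2*pi*sqrt(m/k) with deterministic modal mass m:
# dT/dk = -T/(2k) at the mean, so var(T) ~ (T/(2k))**2 * var(k)
m = 50.0
E_T = 2.0 * np.pi * np.sqrt(m / E_k)
var_T = (E_T / (2.0 * E_k)) ** 2 * var_k
```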

Equivalent nonlinear systems

For illustration purposes, the case where


the original system is a building frame capable
of developing perfectly ductile plastic hinges at its members' edges and the equivalent one is an
elasto-plastic single-degree-of-freedom system is discussed in the following.

Figure 2. Yielding mechanism.

If, in addition, it is assumed that
the frame was designed in accordance with the strong-column-weak-beam principle,
the yielding mechanism may be idealized as shown in fig. 2. Because the simplified
system represents the response in the fundamental mode, its yielding deformation, x_y, is related to
the lateral load configuration that produces the yielding mechanism of the original system as
follows:

p_y P = x_y K Z                                                       (4)

Here, P is the configuration (at an arbitrary scale) of the vector of inertia forces associated with
the modal configuration Z, and p_y is a scaling factor corresponding to the yielding condition.
Premultiplying both members of eq. 4 by Zᵀ, and applying the virtual work principle to the mechanism
of fig. 2, the following result is obtained:

(5)

where R and θ are respectively the vectors of yielding moments of the plastic hinges and of their
angular deformations, and W and δ are respectively the vectors of vertical loads acting on the beam
centers and of their effective displacements. Solving for p_y and obtaining its mean and variance is
straightforward.

CALIBRATION OF RELIABILITIES COMPUTED WITH ALTERNATE MODELS

Let us consider two alternate models used to estimate the dynamic response and
performance (damage level) of a structural system subjected to seismic ground
motion. Both models differ in the degree of detail and accuracy with which the
corresponding real system is represented. The geometrical and mechanical proper-
ties are obtained in accordance with previously established rules from the proper-
ties of a system intended to represent the real one as closely as possible, within
practical conditions and the state of the art.

Let us designate by A and S the models representing the real system in detailed and simplified
manner, respectively. In general, it will be of interest to obtain rules transforming the
predictions of model S into those that would be furnished by model A. In our formulation, seismic
ground motion is defined as a
stochastic process, and the properties of systems A and S may be either uncertain
or deterministically known. The behavior of these systems will be defined by the behavior indices
Q_A and Q_S, such that if either of them exceeds its critical value (unity) the corresponding system
will fail. Because of our previous assumptions, for a system designed and built in accordance with
given specifications and subjected to an earthquake of a given intensity, Q_A and Q_S will be
random; if the

forms, but not the parameters, of their probability density functions are assumed,
the problem to be solved is that of obtaining the transformation rules linking
those parameters. Those rules may vary with the ground motion intensity or may be
independent from it.

Concentrating our attention on the most general case, let us denote by E_A and E_S the vectors of
parameters that define respectively the probability density functions of Q_A and Q_S, and let us
assume that they are related through the transformation matrix T, such that E_A = T E_S, where all
these matrices may be expressed as functions of Y, the earthquake intensity (or vector of ground
motion characteristics including, in addition, frequency content, duration, etc.) normalized with
respect to the nominal seismic resistance, and H, a vector of parameters to be estimated.

In this paper it is assumed that the samples of values of Q_A and Q_S are generated by Monte Carlo
simulation. If these samples are sufficiently large, their probability density functions may be
obtained in accordance with the methods of classical statistical theory, their parameters E_A and
E_S may be estimated within narrow uncertainty margins in terms of Y and, under these circumstances,
T will be readily determined. If, as will often be the case, the sample of values of Q_S is large,
but that corresponding to Q_A is small, we will have to resort to small-sample estimation theory:
according to Bayes' theorem,

(6)

where K is a normalizing constant, f′_H and f″_H are respectively the prior and posterior
probability density functions of the parameters H; A is the event described by the observed sample
of values of Q_A and Q_S, and p(A|h) is the conditional probability of occurrence of event A given
H = h. In order to compute this probability, E_A must be obtained for H = h through the
transformation E_A(Y) = T(Y,h) E_S(Y), where the second factor in the second member is assumed
known.

In some instances the observations about Q_A include one or several uncertain values. For instance,
if P-delta effects are taken into account when computing the response of elastoplastic systems, the
envelope of the force-deformation curve has a negative slope, and at some story the computed
deformation may reach the value where the mentioned envelope intersects the horizontal axis. In that
case, the information supplied by the response analysis will not be a value of Q_A but, instead, the
interval Q_A > q̄_A, where q̄_A is the critical value of Q_A. Then, p(A|h) will be given by the
following equation:

p(A|h) = Π_{i=1}^{N₁} f_{Q_A}(q_i|h) Π_{j=1}^{N₂} p(Q_Aj > q̄_Aj | h)          (7)

Here, N₁ and N₂ are respectively the numbers of cases for which model A produces point values and
interval information about Q_A; q_i, i = 1, …, N₁, are the calculated values of Q_A; Q_Aj is the
behavior index associated with the j-th realization of model A for which failure occurs, and q̄_Aj
is the corresponding critical value.
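The likelihood in (7) mixes density ordinates for the N₁ point observations with exceedance probabilities for the N₂ censored (failure) observations. A sketch, assuming for illustration a lognormal distribution of Q_A whose median plays the role of the parameter h:

```python
import math

def lognormal_pdf(q, median, cov):
    s = math.sqrt(math.log(1.0 + cov**2))      # sigma of the underlying normal
    return math.exp(-0.5 * ((math.log(q) - math.log(median)) / s) ** 2) / \
           (q * s * math.sqrt(2.0 * math.pi))

def lognormal_sf(q, median, cov):
    # survival function P(Q > q): exceedance probability for censored cases
    s = math.sqrt(math.log(1.0 + cov**2))
    z = (math.log(q) - math.log(median)) / s
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def likelihood(h, point_obs, censored_obs, cov=0.3):
    L = 1.0
    for q in point_obs:                 # N1 point values of Q_A
        L *= lognormal_pdf(q, h, cov)
    for q_crit in censored_obs:         # N2 failures: Q_A exceeded q_crit
        L *= lognormal_sf(q_crit, h, cov)
    return L

# A median near the point observations is more likely than a much lower one
assert likelihood(0.6, [0.5, 0.7], [1.0]) > likelihood(0.2, [0.5, 0.7], [1.0])
```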

CALIBRATION OF MODELS WITH REALITY

This process is conceptually similar to calibrating reliabilities resulting from two alternate
models of the same system: it would suffice to replace model A
with the population of real structures belonging to a given family. There appear,
however, significant differences, which demand the solution of additional problems:
the information about performance of real systems may arise from more than one
earthquake, refers to more than one system and may be heterogeneous; that is, it
may consist of several groups of "observed" (in most cases, derived) values of Q,
each group characterized by its own degree of uncertainty. For instance, observa-
tions about structural behavior during a given earthquake may include values of
maximum story deformations, qualitative or quantitative descriptions of damage, its
cost, numbers of collapsed structures, etc. Before this information can be applied
in a context similar to that of eqs. 6 and 7, it must be arranged into homogeneous
sets, each corresponding to a family of structures for which the same transforma-
tion matrix T applies. Once this is done, the uncertain information about struc-
tural performance must be transformed into probability density functions of the
values of QA. Then, the counterparts of eqs. 6 and 7 may be applied. For instance,
if gQAt is the probability density function assigned to the value of QA that
occurred in the i-th of N systems knowing that the corresponding damage was d 1 , we
obtain

f"(h)
H
= K/(h) ~
H t =1 JfQAt (qlh)gQAt (qld )dq
i
(8)

where f_{Q_Ai} is the probability density function of Q_A for the i-th system, given H = h. For a
structure that fails, the corresponding integral in the second member of eq. 8 equals
∫₁^∞ f_{Q_Ai}(q|h) dq.

When selecting the structures to be included in the sample considered by eq.


8, it must be kept in mind that the magnitudes of the resulting uncertainties about
parameters H are very sensitive to the sample size and to the range of values of Y
covered.

The approach represented by eq. 8 is based on the specific information about


the properties of each structure and the observations about its performance. For
practical reasons, N will not be large. A complementary approach is also of inter-
est, where global statistical information about properties and performance of a

population of structures is available. In the ideal case it may be assumed that the distribution of
damage throughout the population can be estimated from the observations by means of conventional
statistical theory. This distribution is subsequently used to obtain that of Q_A, which implies that
E_S will be deterministically known and, therefore, that T(Y) may be obtained simply by solving the
matrix equation E_A(Y) = T(Y) E_S(Y). Eliminating the uncertainty tied to T does not necessarily
imply reducing that related to the predicted values of Q_A, because the dispersions of the values of
Q_S resulting from the variety of cases included in the population considered may be too large.

A situation likely to be encountered in practice is that in which a large por-
tion of the information relevant to update f'_H(h) consists of the description of
damage in a large number of imprecisely described structures of the same type as
those that constitute the sample of theoretical models studied. In this case we
will start, as before, from a) the Bayesian distributions of E_s(Y) resulting from
the reliability analysis of the theoretical models, and b) the description of the
damage experienced by the real systems belonging to the set studied; but now the
properties of each member of any of these two sets are described by a small number
of properties, R. (For typical office or apartment buildings, R may include, for
instance, base shear capacity, natural period, variation of strength and stiffness
along the height, average ratios between stiffnesses or strengths of beams and
columns, etc.) Eq. 8 would apply also in this case, but now the parameters H of
the transformation matrices T would have to be expressed as functions of the vector
R of properties of the system considered. As before, the problem to solve is that
of obtaining a Bayesian distribution of H(R). The posterior Bayesian distribution
of H, and hence of E_A, would now be a function of R.

In the particular case studied in detail in this paper it is assumed that all
available observations about behavior of actual structures correspond to the same
earthquakes or to earthquakes of nearly the same intensity. Accordingly, we will
not be concerned with the variation of T with respect to intensity. More precise-
ly, we shall assume that T is formed by the set of unknown parameters H = {T_m,
T_v} such that

    E(Q_A) = T_m E(Q_s),    V_QA = T_v V_Qs                    (9a,b)

where E(·) stands for expectation and V for variation coefficient.

Example

It is assumed that an algorithm similar to that described by Esteva and Ruiz


(1989) is applied to ten structures assumed to be subjected to members of a family
of strong motion records with specified statistical properties, as a result of
which it is concluded that in all cases Q_s has a lognormal probability distribution,
with the parameters given in the second and third columns of Table 1. The last
column of this table displays the damage levels experienced by those structures,
estimated by experts on the basis of Table 2, under the action of an earthquake
belonging to the family mentioned above. The prior probability distribution of (H_1,
H_2) = (T_m, T_v) is taken as discrete and uniform at the points corresponding to
all the pairs of T_m = 0.2, 0.5, 1.0, 2.0, 5.0 and T_v = 0.5, 1.0, 2.0.

TABLE 1. RELIABILITY INDICES

Structure   E(Q_s)   V_Qs    D
    1        0.30    0.25   0.15
    2        0.50    0.30   0.60
    3        0.90    0.35   0.85
    4        1.50    0.35   0.85
    5        0.80    0.30   0.85
    6        1.05    0.25   0.95
    7        0.80    0.30   0.95
    8        1.00    0.30   0.95
    9        1.20    0.35   1.00
   10        1.00    0.35   1.00

TABLE 2. DAMAGE LEVELS

LEVEL   DESCRIPTION
0       NULL
0.05    NEGLIGIBLE
0.15    LIGHT
0.30    MODERATE
0.60    CONSIDERABLE
0.85    SEVERE
0.95    EXTREME
1.0     TOTAL (collapse)

On the basis of field and laboratory experience, the conditional distribution
of Q in a structure given the observed damage D (g_QA in eq. 8) is taken as
lognormal, with mean value E(Q) = D^0.42 and variation coefficient
V_Q = 0.5(1-D)^1.75.
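As a concrete illustration, the updating scheme of eq. 8 can be sketched numerically. The fragment below is only a sketch under stated assumptions: it takes f_QAi as lognormal with mean T_m E(Q_s) and variation coefficient T_v V_Qs (eqs. 9a and b), uses the data of Tables 1 and 2, treats the collapsed structures (D = 1) through the probability P(Q_A > 1), and adopts illustrative quadrature choices that are not the authors'.

```python
import math

def lognormal_pdf(q, mean, cov):
    """Lognormal density parameterized by its mean and variation coefficient."""
    s2 = math.log(1.0 + cov ** 2)
    mu = math.log(mean) - 0.5 * s2
    return math.exp(-(math.log(q) - mu) ** 2 / (2.0 * s2)) / (q * math.sqrt(2.0 * math.pi * s2))

def lognormal_sf(q, mean, cov):
    """P(Q > q) for the same lognormal; used for collapsed structures (D = 1)."""
    s2 = math.log(1.0 + cov ** 2)
    mu = math.log(mean) - 0.5 * s2
    return 0.5 * math.erfc((math.log(q) - mu) / math.sqrt(2.0 * s2))

def overlap(mean_f, cov_f, mean_g, cov_g, n=800):
    """Trapezoidal evaluation of  integral f(q|h) g(q|d) dq  in log-q space,
    over +/- 10 standard deviations of the (typically narrower) density g."""
    sg = math.sqrt(math.log(1.0 + cov_g ** 2))
    mug = math.log(mean_g) - 0.5 * sg ** 2
    lo, hi = mug - 10.0 * sg, mug + 10.0 * sg
    h = (hi - lo) / n
    total = 0.0
    for k in range(n + 1):
        q = math.exp(lo + k * h)
        w = 0.5 if k in (0, n) else 1.0
        total += w * lognormal_pdf(q, mean_f, cov_f) * lognormal_pdf(q, mean_g, cov_g) * q
    return total * h

# Table 1: (E(Q_s), V_Qs, observed damage D) for the ten structures.
TABLE1 = [(0.30, 0.25, 0.15), (0.50, 0.30, 0.60), (0.90, 0.35, 0.85),
          (1.50, 0.35, 0.85), (0.80, 0.30, 0.85), (1.05, 0.25, 0.95),
          (0.80, 0.30, 0.95), (1.00, 0.30, 0.95), (1.20, 0.35, 1.00),
          (1.00, 0.35, 1.00)]
TM = (0.2, 0.5, 1.0, 2.0, 5.0)   # prior support of T_m
TV = (0.5, 1.0, 2.0)             # prior support of T_v

def likelihood(tm, tv):
    """Product over the N structures of the integrals in eq. 8."""
    L = 1.0
    for eqs, vqs, d in TABLE1:
        mean_f, cov_f = tm * eqs, tv * vqs      # eqs. 9a and b
        if d >= 1.0:                            # collapse observed: P(Q_A > 1)
            L *= lognormal_sf(1.0, mean_f, cov_f)
        else:                                   # g has mean D^0.42, cov 0.5(1-D)^1.75
            L *= overlap(mean_f, cov_f, d ** 0.42, 0.5 * (1.0 - d) ** 1.75)
    return L

# Posterior over the uniform discrete prior grid; the constant K of eq. 8
# is recovered by normalization.
post = {(tm, tv): likelihood(tm, tv) for tm in TM for tv in TV}
Z = sum(post.values())
post = {k: v / Z for k, v in post.items()}
```

The resulting grid of posterior probabilities can then be compared with Table 3; the exact cell values depend on the quadrature choices and need not reproduce the table digit for digit.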

For each pair of values of T_m and T_v the integrals in the second member of eq.
8 are evaluated numerically for all the structures studied. For this purpose,
f_QAi is taken as lognormal, with the parameters given by eqs. 9a and b. The poste-
rior probability distribution of T_m and T_v is the matrix given in Table 3. From
this information, the means of T_m and T_v are 1.01 and 0.905, and their variation
coefficients 0.099 and 0.3192, respectively. Also, E(T_m² T_v²) = 1.0225.

TABLE 3. POSTERIOR DISTRIBUTION OF (T_m, T_v): VALUES OF P''_ij

                              T_v
  T_m        0.5            1.0            2.0
  0.2        0              0              0
  0.5        0              0.410x10^-15   4.74x10^-21
  1.0        0.24           0.724          2.067x10^-2
  2.0        0              0.301x10^-6    1.013x10^-2
  5.0        0              0              8.562x10^-12

Consider now a new structure represented by a model of the type adopted for the
calibration presented above. Let E_1(Q) and var_1 Q be respectively the theoretical
mean and variance of Q (resulting from a conventional seismic reliability analysis).
Then, the marginal mean and variance of Q, accounting for the Bayesian uncertainty
about T_m and T_v, can be obtained as follows (Parzen, 1962):

    E(Q) = E(E(Q|T_m, T_v))                                        (10a)

    var Q = E(var(Q|T_m, T_v)) + var(E(Q|T_m, T_v))                (10b)

In the case under study, E(Q|T_m, T_v) = T_m E_1(Q) and var(Q|T_m, T_v) =
T_m² T_v² var_1 Q; therefore, the posterior marginal mean and variance of Q are:

    E''(Q) = E''(T_m) E_1(Q) = 1.01 E_1(Q)                         (11a)

    var''(Q) = var''(T_m) E_1²(Q) + E''(T_m² T_v²) var_1 Q         (11b)
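The posterior moments quoted above follow directly from Table 3 and eqs. 10 and 11. The sketch below recomputes them from the rounded table entries (renormalized, since the printed cells do not sum exactly to one), so the results match the quoted figures only to within rounding; the new-structure moments E1 and var1 are illustrative placeholders, not values from the paper.

```python
# Posterior probabilities P'' from Table 3; keys are (T_m, T_v).
POST = {(0.5, 1.0): 0.410e-15, (0.5, 2.0): 4.74e-21,
        (1.0, 0.5): 0.24,      (1.0, 1.0): 0.724,   (1.0, 2.0): 2.067e-2,
        (2.0, 1.0): 0.301e-6,  (2.0, 2.0): 1.013e-2,
        (5.0, 2.0): 8.562e-12}
Z = sum(POST.values())   # renormalization of the rounded table

def post_mean(fun):
    """Posterior expectation of fun(T_m, T_v) over the renormalized grid."""
    return sum(fun(tm, tv) * p for (tm, tv), p in POST.items()) / Z

E_tm = post_mean(lambda tm, tv: tm)                                    # ~ 1.01
E_tv = post_mean(lambda tm, tv: tv)                                    # ~ 0.91 (paper: 0.905)
cov_tm = (post_mean(lambda tm, tv: tm ** 2) - E_tm ** 2) ** 0.5 / E_tm   # ~ 0.099
cov_tv = (post_mean(lambda tm, tv: tv ** 2) - E_tv ** 2) ** 0.5 / E_tv   # ~ 0.32
E_tm2tv2 = post_mean(lambda tm, tv: tm ** 2 * tv ** 2)                 # ~ 1.03 (paper: 1.0225)

# Eqs. 11a,b for a new structure with theoretical moments E1 and var1
# (illustrative numbers, not from the paper):
E1, var1 = 1.0, 0.09
E_post_Q = E_tm * E1
var_post_Q = (post_mean(lambda tm, tv: tm ** 2) - E_tm ** 2) * E1 ** 2 + E_tm2tv2 * var1
```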

CONCLUDING REMARKS

The formulation presented permits obtaining probability distributions of


factors transforming means and variation coefficients of the behavior indices given
by theoretical models for the analysis of seismic reliability into values which are
consistent with observations about performance of structures subjected to
earthquakes.

Successful applications of this approach rely on the consistency of subjective


estimates of the conditional probability distribution of the behavior index given
qualitative descriptions of damage level. This is a line where significant
research efforts are justified.

REFERENCES

Benjamin, J.R., and Cornell, C.A. (1970), "Probability, Statistics, and Decision
for Civil Engineers", McGraw-Hill, New York

Crandall, S.H. (1956), "Engineering Analysis", McGraw-Hill, New York

Esteva, L., Diaz, O., Mendoza, E., and Quiroz, N. (1989), "Reliability Based
Design Requirements for Foundations of Buildings Subjected to Earthquakes",
Proc., San Francisco

Esteva, L., and Ruiz, S.E. (1989), "Seismic Failure Rates of Multistory Frames",
ASCE Journal of Structural Engineering, 115, 2, 268-284

Rosenblueth, E., and Esteva, L. (1962), "Diseño Sísmico de Edificios", Ediciones
Ingeniería, Mexico City
COMPUTATIONAL EXPERIENCE WITH VECTOR
OPTIMIZATION TECHNIQUES FOR STRUCTURAL SYSTEMS

Dan M. Frangopol* & Marek Klisinski**


*Department of Civil, Environmental and Architectural Engineering
University of Colorado at Boulder, Boulder, Colorado 80309-0428, USA
**Kraviecka 8m, 91-818 Lodz
Formerly, Dept. of Civil, Environmental and Architectural Engineering
University of Colorado at Boulder, Boulder, Colorado 80309-0428, USA

1. INTRODUCTION

Single objective optimization has been the basic approach in most previous work on the design
of structural systems. The purpose was to seek optimal values of design variables which minimize
or maximize a specific single quantity termed objective function, while satisfying a variety of
behavioral and geometrical conditions, termed constraints. In this definition of structural opti-
mization, the quality of a structural system is evaluated using one criterion (e.g., total cost, total
weight, system reliability, system serviceability). In a recent survey, Levy and Lev [8] presented
a state-of-the-art review in the area of structural optimization. They stress developments in the
field of single objective optimization and acknowledge that in many design problems engineers
are confronted with alternatives that are conflicting in nature. For these problems, where the
quality of a design is evaluated using several competing criteria, vector optimization (also called
multiobjective, multicriterion or Pareto optimization) should be used.

Vector optimization of structural systems is an important idea that has only recently been
brought to the attention of the structural optimization community by Duckstein [1], Koski [7],
Osyczka [10], Murthy et al. [9], Frangopol and Klisinski [4], and Fu and Frangopol [6], among
others. It was shown that there are many structural design situations in which several conflicting
objectives should be considered. For example, a structural system is expected to be designed
such that both its total weight and its maximum displacements are minimized. In such a situation, the
designer's goal is to minimize not a single objective but several objectives simultaneously.

The main difference between single and multiobjective (vector) optimization is that the latter
almost always gives not a single solution but a set of solutions. These solutions are called Pareto-
optimum, non-inferior or non-dominated solutions. If a point belongs to a Pareto set there is no
way of improving any objective without worsening at least one other. The advantage of
vector optimization is that it allows one to choose among different results and to find
the best possible compromise. It requires, however, considerable computational effort. In this paper, some of
the computational experience with vector optimization techniques for structural systems gained
recently at the University of Colorado, Boulder, is presented.

In the first part of this paper the mathematical formulation of vector optimization is reviewed
and three methods to calculate Pareto-optimum solutions are discussed. The second part describes
a vector optimization problem consisting of minimizing both volume and displacements of a given
structural system having elastic or elastic-perfectly plastic behavior. The third part of the work
contains illustrations of vector optimization of a 3-D truss system under single and multiple
loading conditions, followed by a discussion on the computational experience and the conclusions.

2. VECTOR OPTIMIZATION: MATHEMATICAL FORMULATION AND SOLUTION METHODS

2.1 Mathematical formulation [4]

A vector optimization problem can be formulated in the following way [7]:

min f(x) (1)

where x ∈ Ω and f: Ω → R^m is a vector objective function given by

    f(x) = [f_1(x), f_2(x), ..., f_m(x)]^T                          (2)

Each component of this vector describes a single criterion

    f_i: Ω → R,    i = 1, 2, ..., m                                 (3)

The design variable vector x belongs to the feasible set Ω defined by equality and inequality
constraints as follows

    Ω = {x ∈ R^n : h(x) = 0, g(x) ≤ 0}                              (4)
The image of the feasible set Ω in the objective function space is denoted by f(Ω).

If the components of the objective vector f(x) are not fully independent, there is usually no
unique point at which all the objective functions reach their minima simultaneously. As pre-
viously mentioned, the solution of a vector optimization problem is called a Pareto-optimum or
non-dominated solution. Two types of non-dominated solutions can be defined as follows [1]:

(a) a point x⁰ ∈ Ω is a weakly non-dominated solution if and only if there is no x ∈ Ω such that

    f(x) < f(x⁰)                                                     (5)

(b) a point x⁰ ∈ Ω is a strongly non-dominated solution if there is no x ∈ Ω such that

    f(x) ≤ f(x⁰)                                                     (6)

and for at least one component i

    f_i(x) < f_i(x⁰),    i = 1, 2, ..., m                            (7)

If the solution is strongly non-dominated it is also weakly non-dominated. Such a strongly
non-dominated solution is also called a Pareto-optimum. In other words, the above definitions state
that if the solution is strongly non-dominated, none of the objective functions can be decreased
without causing a simultaneous increase in at least one other objective.

The main task of vector optimization is to find the set of strongly non-dominated solutions
(also called Pareto solutions or minimal curve) in the objective function space and the corre-
sponding values of the design variables (Pareto optimal curve) in the feasible region space.

2.2 Solution methods

There are a number of vector optimization solution techniques described in the literature (see
Duckstein [1], Koski [7], Osyczka [10], among others). Not all of them are, however, suitable for
structural optimization. Three solution methods were investigated at the University of Colorado,
Boulder: the weighting method, the minimax method and the constraint method. These methods
are widely discussed in [1], [7] and [10], among others.

The basic idea of the weighting method is to define the objective function F as a scalar product
of the weight vector w and the objective function vector f

    F = w · f                                                        (8)

Without loss of generality the vector w can be normalized. The Pareto optimal set can be
theoretically obtained by varying the weight vector w. In the objective function space the single
objective function F is linear. For a fixed vector w, the optimization process results in a point
at which a hyperplane representing the single objective function is tangential to the image of the
feasible set f(Ω). Only if the set f(Ω) is convex is the weighting method able to generate all
the strongly non-dominated solutions. The second drawback of this technique is the difficulty
involved in choosing the proper weight factors. Since the shape of the image of the feasible set
f(Ω) is generally unknown, it is almost impossible to predict where the solution will be located.
Sometimes the problem is also very sensitive to the variation of the weights. In such a case the
weighting approach can prove unsatisfactory.

Another method for solving vector optimization problems is the minimax method described in
[7]. This method introduces distance norms into the objective function space. In this method the
reference point from which distances are measured is the so-called ideal solution. This solution
can be described by the vector

    f⁰ = [f_1*, f_2*, ..., f_m*]^T                                   (9)

where all its components are obtained as the solutions of m single objective optimization problems

    f_i* = min_{x∈Ω} f_i(x)                                          (10)

Generally, the ideal solution is not feasible, so it does not belong to the set f(Ω).

In the minimax method the norm is defined as follows:

    max_i w_i (f_i - f_i*),    i = 1, 2, ..., m                      (11)

where the vectors w and f were previously defined. For a given vector w, the norm (11) cor-
responds to a search along some line starting from the ideal solution. If the other
norms are used, this approach generates the entire family of non-dominated solutions.

The minimax approach eliminates one drawback of the weighting method, because it is also
suitable for non-convex problems. It may, however, also be sensitive to the values of the weight
factors. The ability to predict where the solution will be located is improved. To
use this method it is necessary first to know the ideal solution, which calls for solving m scalar
optimization problems: this is the price to pay for using this method.

Another alternative to the previous methods is the ε-constraint method. The main idea of
this method is to convert m-1 objective functions into constraints. This can be obtained by the
assumption that values below some assumed levels are satisfactory for all these functions.
Without loss of generality it may be assumed that the components f_2, f_3, ..., f_m of the objective
function vector will be constrained and only f_1 will be minimized.
Let us define the concept of this method in mathematical terms. The original problem (1-4)
is replaced by

    min f_1(x),    x ∈ Ω_ε                                           (12)

where

    Ω_ε = {x ∈ Ω : f_i ≤ ε_i,  i = 2, ..., m}                        (13)

and

    ε = [ε_2, ε_3, ..., ε_m]^T                                       (14)

The entire Pareto set can be obtained by varying the ε vector. The constraint method applies
to non-convex problems and does not require any additional computations. It may be treated as
the basic numerical technique in vector optimization.

There are also many other multiobjective optimization methods (see, e.g., [1]), but they are
less suitable for our purposes.
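To make the scalarizations concrete, the following sketch applies the weighting method (eq. 8), the minimax method (eq. 11) and the ε-constraint method (eqs. 12-13) to a deliberately simple two-objective problem. The problem is our own toy example, not one of the paper's trusses, and each scalar subproblem is solved by brute-force search over a design grid rather than by a direct-search algorithm.

```python
# Toy problem: f1(x) = x^2 and f2(x) = (x - 2)^2 on 0 <= x <= 2;
# every x in [0, 2] is Pareto-optimal (the two objectives conflict everywhere).
def f1(x): return x * x
def f2(x): return (x - 2.0) ** 2

XS = [2.0 * i / 1000 for i in range(1001)]   # brute-force design grid

def weighting(w1, w2):
    """Weighting method, eq. 8: minimize F = w1*f1 + w2*f2."""
    return min(XS, key=lambda x: w1 * f1(x) + w2 * f2(x))

F1_STAR = min(f1(x) for x in XS)   # ideal solution components, eq. 10
F2_STAR = min(f2(x) for x in XS)

def minimax(w1, w2):
    """Minimax method, eq. 11: minimize max_i w_i (f_i - f_i*)."""
    return min(XS, key=lambda x: max(w1 * (f1(x) - F1_STAR),
                                     w2 * (f2(x) - F2_STAR)))

def eps_constraint(eps):
    """Epsilon-constraint method, eqs. 12-13: minimize f1 subject to f2 <= eps."""
    return min((x for x in XS if f2(x) <= eps), key=f1)

# Sweeping the weights / epsilon levels traces the Pareto set.
pareto_w = [weighting(w, 1.0 - w) for w in (0.1, 0.3, 0.5, 0.7, 0.9)]
pareto_e = [eps_constraint(e) for e in (0.25, 1.0, 2.25, 4.0)]
```

Because the image set of this toy problem is convex, all three methods trace the same front; on a non-convex front the weighting method would miss the non-convex portions, while the minimax and ε-constraint methods would not.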

3. VECTOR OPTIMIZATION PROBLEM

The described mathematical methods have been applied to minimize both the volume and
displacements for a given structure and assumed material behavior. Therefore, the objective
function vector has two major components

    f = [V, d]^T                                                     (15)

where V denotes the volume of the structure and d a displacement vector. Usually the vector d
contains maximum displacements of all nodes in two perpendicular directions

(16)

but it can also contain displacements of specified nodes

(17)

The optimization variable vector x includes the cross-sectional areas a_i of groups of elements. The
allowable areas are limited by minimum and maximum values

    a_min ≤ a_i ≤ a_max                                              (18)

The material has to satisfy a constitutive relation and, in the case of elastic behavior, the stress
constraints

    σ⁻ ≤ σ ≤ σ⁺                                                      (19)

where σ⁻ and σ⁺ are the allowable compression and tension stresses, respectively.

Two types of material behavior have been considered: elastic and elastic-perfectly plastic. The
nonlinear material behavior imposes more difficulties because of a larger computational effort. If
the material behavior is perfectly plastic, displacements may not be unique and, therefore, they
will be computed not for the ultimate load, but for the design load. The ultimate load is higher
than the design load because of the reserve strength of the structure. The ratio between these two
loads depends on the assumed value of the reserve strength factor. For reasonably constructed
structures the design displacements are elastic and, therefore, they are unique. Structures can be
optimized with respect to one or multiple load conditions, which means that in this latter case
the optimal structure has to satisfy all stress and displacement constraints for all load conditions.
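The ingredients of this formulation (volume objective, displacement objective, area bounds of eq. 18 and stress limits of eq. 19) can be illustrated on a hypothetical statically determinate two-bar truss. This is our own minimal example, not one of the paper's structures; only the value of E, the stress limits and the area bounds are borrowed from the example of Section 4, while the load and geometry are assumptions.

```python
import math

# Two bars at 45 degrees supporting a vertical load P at their common node.
P = 10.0                       # kips, applied load (assumed)
E = 10000.0                    # ksi, as in the Section 4 example
L = 100.0 * math.sqrt(2.0)     # in, bar length (assumed geometry)
A_MIN, A_MAX = 0.1, 3.0        # in^2, area bounds of eq. 18
S_MINUS, S_PLUS = -36.0, 36.0  # ksi, stress limits of eq. 19

def objectives(a):
    """Objective vector f = [V, d] of eq. 15, plus the bar stress."""
    n_force = P / math.sqrt(2.0)     # bar force from statics (same in both bars)
    sigma = n_force / a              # bar stress
    volume = 2.0 * a * L             # volume objective
    displ = P * L / (E * a)          # load-point deflection (unit-load method)
    return volume, displ, sigma

def feasible(a):
    """Check the constraints of eqs. 18 and 19 for a candidate design."""
    _, _, sigma = objectives(a)
    return A_MIN <= a <= A_MAX and S_MINUS <= sigma <= S_PLUS

V, d, s = objectives(1.0)   # a = 1 in^2: V ~ 282.8 in^3, d ~ 0.141 in
```

Increasing a lowers d but raises V, so the feasible interval of a maps onto a Pareto front; note also that at the lower area bound of eq. 18 the stress limit of eq. 19, not the bound itself, governs feasibility in this sketch.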

All three optimization methods described in the previous section have been used to obtain
Pareto sets. In the next section, the results of the weighting method are illustrated with an
example considering a twenty-five-bar space truss structure.

4. EXAMPLE

Three example problems (i.e., a four-bar, a ten-bar, and a twenty-five-bar truss) have been
solved by the authors to show how vector optimization can be applied to truss structures.

The material behavior was considered to be elastic in the four-bar and twenty-five-bar truss
examples. In the ten-bar truss example the material is elastic-perfectly plastic. Two of the
examples (i.e., the four-bar and twenty-five-bar trusses) have been optimized for two load cases
and one (i.e., the ten-bar truss) for a single load case. Owing to space limitations, this section is
restricted to the twenty-five-bar space truss example.

Consider the twenty-five-bar steel truss subjected to a single lateral load Q = 62.5 kips as
shown in Fig. 1(a). This structure has been optimized for minimum volume in [3]. Using the same
assumptions as in [3], except that now the behavior is assumed to be elastic (i.e., -36 ksi ≤ σ ≤
36 ksi), E = 10,000 ksi and 0.1 in² ≤ a_i ≤ 3.0 in², the vector optimization has been performed
by the weighting method. The truss members have been linked into eight groups of different
cross-sectional areas as follows: A_1 = a_1; A_2 = a_2 = a_3 = a_4 = a_5; A_3 = a_6 = a_7 =
a_8 = a_9; A_4 = a_10 = a_11; A_5 = a_12 = a_13; A_6 = a_14 = a_15 = a_16 = a_17; A_7 =
a_18 = a_19 = a_20 = a_21; and A_8 = a_22 = a_23 = a_24 = a_25, where a_i = cross-sectional
area of member i. The volume of the truss together with the maximum horizontal displacement
in the same direction as the applied load, δ_x = D, have been considered as objectives

    f = [V, D]^T                                                     (20)

The results of the vector optimization are shown in Table 1, where the weight factors w_1 and w_2
are applied to the volume V and the horizontal displacement δ_x = D, respectively. Fig. 2 shows
the image of the Pareto set in the objective function space.

Next, to extend the objective function space, one more load case (Le., load P = 40.4 kips)
has been considered as shown in Fig. l(b). In both load cases the maximum displacement in the

Fig. 1 Twenty-Five-Bar Space Truss under Two Horizontal Load Cases:
(a) Load Case 1, Force Q only; (b) Load Case 2, Force P only

Table 1 Vector Optimization Results, Load Case 1

 WEIGHT FACTORS        AREAS OF GROUPS OF ELEMENTS (in²)                 VOLUME  DISPL.
  w_1    w_2     A1     A2     A3     A4     A5     A6     A7     A8    V (in³)  D (in)
  1.00   0.00   0.798  0.819  0.670  0.205  0.100  0.100  0.275  1.306   1788.8   3.347
  0.90   0.10   0.811  0.819  0.670  0.205  0.100  0.100  0.275  1.306   1789.3   3.346
  0.70   0.30   1.078  1.021  0.834  0.304  0.100  0.100  0.353  1.763   2300.2   2.586
  0.50   0.50   1.658  1.556  1.274  0.509  0.100  0.100  0.540  2.721   3488.4   1.694
  0.30   0.70   2.539  2.375  1.945  0.803  0.100  0.100  0.825  3.000   4668.6   1.295
  0.10   0.90   3.000  3.000  3.000  0.527  0.100  2.097  1.623  3.000   7463.5   0.954
  0.00   1.00   3.000  3.000  3.000  3.000  3.000  3.000  3.000  3.000   9921.6   0.878


Fig. 2 Pareto Solutions (i.e., Minimal Curve) in the Objective Function Space; Load Case 1

Table 2 Vector Optimization Results Load Cases 1 and 2

 WEIGHT FACTORS          AREAS OF GROUPS OF ELEMENTS (in²)               VOLUME    DISPL.
 w_1   w_2   w_3    A1    A2    A3    A4    A5    A6    A7    A8        V (in³)  δ_x (in)  δ_y (in)
1.0 0.0 0.0 0.786 0.830 0.956 0.131 0.100 0.190 0.472 1.263 2088.8 3.0201 2.9534
0.0 1.0 0.0 3.000 3.000 3.000 3.000 3.000 3.000 3.000 3.000 9921.6 0.8783 0.7812
0.0 0.0 1.0 0.519 3.000 3.000 0.100 3.000 3.000 3.000 3.000 9300.6 1.0114 0.7812
0.8 0.1 0.1 0.784 0.857 0.944 0.103 0.100 0.228 0.466 1.286 2129.2 2.9559 2.8752
0.6 0.2 0.2 0.992 1.427 1.328 0.102 0.100 0.377 0.637 2.013 3226.7 1.9278 1.8958
0.4 0.3 0.3 1.489 2.146 1.988 0.149 0.100 0.568 0.964 3.000 4830.3 1.2875 1.2645
0.2 0.4 0.4 2.467 3.000 3.000 0.100 0.100 1.062 1.714 3.000 6675.7 1.0152 0.9115
0.7 0.2 0.1 0.932 1.145 1.000 0.141 0.100 0.263 0.459 1.702 2562.9 2.3836 2.4452
0.7 0.1 0.2 0.764 1.116 1.153 0.100 0.100 0.311 0.565 1.546 2622.1 2.4136 2.3101
0.5 0.3 0.2 1.345 1.759 1.561 0.176 0.100 0.428 0.736 2.542 3927.8 1.5636 1.5776
0.5 0.2 0.3 1.066 1.733 1.698 0.100 0.100 0.480 0.835 2.389 3968.3 1.5894 1.5241
0.5 0.4 0.1 1.587 1.740 1.456 0.313 0.100 0.325 0.654 2.713 3868.9 1.5574 1.6663
0.5 0.1 0.4 0.714 1.644 1.887 0.100 0.100 0.494 0.959 2.232 3991.6 1.6548 1.4880
0.3 0.4 0.3 2.007 2.629 2.456 0.100 0.100 0.770 1.190 3.000 5624.0 1.1366 1.0780
0.3 0.3 0.4 1.707 2.589 2.619 0.100 0.100 0.770 1.308 3.000 5735.2 1.1319 1.0467
0.3 0.5 0.2 2.279 2.636 2.320 0.127 0.100 0.756 1.082 3.000 5505.0 1.1480 1.1133
0.3 0.2 0.5 1.348 2.515 2.810 0.100 0.100 0.769 1.435 3.000 5842.0 1.1370 1.0177


Fig. 3 Displacement Pareto Solutions; Load Cases 1 and 2




Fig. 4 Pareto Solutions (i.e., Minimal Curve) in the Mean Displacement - Volume Objective
Space; Load Cases 1 and 2

[Fig. 5 plots force P (kips) against force Q (kips) for the initial structure (vol. = 3301.2)
and the optimized structure (vol. = 2088.8).]

Fig. 5 Load-Carrying Capacity Interaction of the Initial and Optimized (i.e., Minimum Volume)
Twenty-Five-Bar Space Truss

same direction as the applied load constitutes an objective. These two displacements together
with the volume of the structure V are included in the objective vector

(21)

where δ_x and δ_y are maximum horizontal displacements associated with load cases 1 and 2,
respectively. Table 2 shows the results of vector optimization for the two load cases for different
weight vector coefficients. As can be seen, the combinations of these coefficients vary within
a wide range, but all the non-dominated displacement solutions are very close to the diagonal
line representing the average value of the maximum displacements (Fig. 3). For this reason, the
average value of maximum displacements versus minimum volume has been plotted (Fig. 4).

In this example both load cases shown in Figs. 1(a) - (b) can be considered as two extremes.
First the load is applied from the direction of the x-axis (Fig. 1(a)) and then from the direction
of the y-axis (Fig. 1(b)). The load-carrying capacity interactions for the initial structure (A_1 =
A_2 = ... = A_8 = 1 in²) and for the optimized minimum volume truss (i.e., neglecting the
displacement vector objective, see line 1 in Table 2) are plotted in Fig. 5. This figure shows what
can be expected when the twenty-five-bar space truss has to be designed for a variety of loading
situations: optimization reduces the load-carrying capacity for intermediate loading situations.

5. COMPUTATIONAL EXPERIENCE

Due to space limitations, this section contains only a brief summary of the computational
experience gained by the authors in solving vector optimization problems by using the weighting
and the ε-constraint methods.

Vector optimization involves considerable computational effort. To approximate a Pareto set


it is necessary to repeat a single objective optimization many times. Therefore, the efficiency
of algorithms is very important. In our study it was not, however, the most important factor.
To improve the efficiency, it would be imperative to apply a different algorithm for each type of
structure and material behavior. The optimization program described in [2], which is based on
Rosenbrock's algorithm [11], has been used and modified in order to perform structural analysis
with multiple load cases and to include the vectors w and ε for the weighting and ε-constraint
methods, respectively.

The weighting method showed all its drawbacks and the ε-constraint method appeared to be
the most suitable for our purposes. If the weighting method is used, it is necessary to normalize
all the objective functions because of their different units. At the beginning, all criteria are

minimized separately and next the single objective function is defined as follows

F = w· f (22)

where the vector f has been normalized with respect to the minimum values of all objectives
(ideal solution)

    f̄_i = f_i / f_i*                                                 (23)

The weight vector has also been normalized in such a way that

    |w| = (w · w)^{1/2} = 1                                           (24)

After normalization the best possible value of the objective function is equal to one and it can
be obtained if only a single criterion is optimized. In all other cases the value of this function
shows a distance from the ideal solution.

In the ε-constraint method, the objective function vector contains one scalar component (i.e.,
the volume V) and the displacement vector d. It is, therefore, logical to convert the displacement
part of the objective function vector into constraints. The volume of the structure will always be
minimized, but the displacement constraints will change. In this way the entire Pareto set can be
obtained without considering the nonconvexity of the problem. What is very important is that
this approach has never failed and is, in our opinion, the simplest, most dependable and probably
the best possible for this type of problem. The best point to start from is the ideal solution f⁰ (see
Eq. 9). It provides lower and upper bounds for the displacements. Based on these bounds, the
grid of the displacement constraints can be constructed using, for example, equal subdivisions.
When this grid is known, it simply defines the ε-vector for each single objective task.
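The grid construction just described can be sketched in a few lines; the bounds below are illustrative placeholders of the order of the displacement ranges in Table 2, not values from the study.

```python
def epsilon_grid(d_lower, d_upper, subdivisions):
    """Equal subdivision between the ideal-solution lower bounds and the
    single-objective upper bounds; returns one epsilon vector per level."""
    levels = []
    for k in range(1, subdivisions + 1):
        t = k / subdivisions
        levels.append([lo + t * (hi - lo) for lo, hi in zip(d_lower, d_upper)])
    return levels

# Each epsilon vector defines one scalar subproblem: minimize the volume V
# subject to d_i <= epsilon_i (eqs. 12-13). Bounds here are assumed numbers.
GRID = epsilon_grid([0.88, 0.78], [3.02, 2.95], 4)
```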

The minimax approach could not give better results than the ε-constraint method. It is,
however, better than the weighting method and should be considered as a substitute for that
method whenever possible.

6. CONCLUSIONS

Vector optimization is a very useful approach in structural engineering when the design of a
structure has to satisfy conflicting objectives, such as minimum volume and displacements. The
computational experience gained at the University of Colorado, Boulder, has indicated that the
ε-constraint method is the most appropriate technique for this class of problems. The weighting
method is perhaps easier to understand, but much harder to control, and cannot be applied to non-
convex problems. The computer cost associated with solving realistic multiobjective structural
optimization problems could be too high. It will be, therefore, necessary to improve algorithms,
limit as much as possible the decision space, and consider simplifications of material models.
Vector optimization could also be of use in reliability-based design [6] as well as for structural
design code-writers [5].

7. ACKNOWLEDGEMENTS

This work was supported by the National Science Foundation under Grant No. MSM-8618108.

REFERENCES

1. Duckstein, L., "Multiobjective Optimization in Structural Design: The Model Choice Prob-
lem," New Directions in Optimum Structural Design, Eds. E. Atrek et al., John Wiley,
1984.

2. Frangopol, D.M. and Klisinski, M., "Material Behavior and Optimum Design of Structural
Systems," Journal of Structural Engineering, ASCE, Vol. 115, No. 7, 1989.

3. Frangopol, D.M. and Klisinski, M., "Weight-Strength-Redundancy Interaction in Optimum


Design of Three-Dimensional Brittle-Ductile Trusses," Computers and Structures, Vol. 31,
No. 5, 1989.

4. Frangopol, D.M. and Klisinski, M., "Vector Optimization of Structural Systems," Computer
Utilization in Structural Engineering, Ed. J.K. Nelson Jr., ASCE, 1989.

5. Frangopol, D.M. and Corotis, R.B. (Editors), "System Reliability in Structural Analysis,
Design and Optimization," Structural Safety, Special Issue, Vol. 7, Nos. 2-4, 1990.

6. Fu, G. and Frangopol, D.M., "Reliability-Based Vector Optimization of Structural Sys-


tems," Journal of Structural Engineering, Vol. 116, No. 8, 1990 (in print).

7. Koski, J., "Multicriterion Optimization in Structural Design," New Directions in Optimum


Structural Design, Eds. E. Atrek et al., John Wiley, 1984.

8. Levy, R. and Lev, O.E., "Recent Developments in Structural Optimization," Journal of


Structural Engineering, ASCE, Vol. 113, No. 9, 1987.

9. Murthy, N.S., Gero, J.S. and Radford, D.A., "Multifunctional Material System Design,"
Journal of Structural Engineering, ASCE, Vol. 112, No. 11, 1986.

10. Osyczka, A., Multicriterion Optimization in Engineering, Ellis Horwood, 1984.

11. Rosenbrock, H.H., "An Automatic Method for Finding the Greatest or Least Value of a
Function," The Computer Journal, No.3, 1960.
MANAGEMENT OF STRUCTURAL SYSTEM RELIABILITY

Gongkang Fu*, Liu Yingwei** & Fred Moses***


*Engineering R&D Bureau
New York State Department of Transportation
Albany, NY 12232, USA
** Aircraft Research and Design Department
Nanchang Aircraft Manufacturing Company
Nanchang, JiangXi, People's Republic of China
***Department of Civil Engineering
Case Western Reserve University
Cleveland, OH 44106, USA

1. INTRODUCTION

Great progress has been reported in the theory of structural reliability over
the past two decades. One milestone is marked by the important developments in the
theory of structural system reliability, due to the well known interest of engineers
and researchers in system behavior instead of only component failure events.

Over the lifetime of a structural system the reliability level of the intact
state should not be the only focus, although this has received a great deal of
attention of researchers. In its expected lifetime a structural system may
experience various types of deterioration or damage due to corrosion, fatigue and/or
fracture, accidental loss of structural material and capacity, to name a few events.
The deterioration and damage may result in a substantial decrease of the structural
reliability level as compared to the intact state. This is observed in many types
of structural systems such as aircraft, highway bridges, offshore platforms, etc.

The theory of structural system reliability should address those issues such
as effective redundancy implementation, damage tolerability, optimal inspection
schedules, etc. These issues are related to the intermediate states of structural
system performance between the intact and ultimate failure states, which are
sketched in Fig.1. These states deserve more investigation in the development of
structural system reliability theory.

This paper discusses the above failure states in the context of structural
system reliability. This activity is herein referred to as the management of
structural system reliability over a performance lifetime. The concepts of
probabilistic measures of damage tolerability and redundancy are discussed here.
The issues of increasing system reliability, redundancy and damage tolerability, and
searching for the optimal inspection schedule are also addressed. An application
example of a highway bridge is included for illustration, in which both new
structural design and existing structure evaluation are discussed. Various damage
states are considered, including corrosion loss, fatigue and accidental collision.

2. CONCEPTS AND ISSUES

2.1 Structural System Reliability

Structural system reliability, understood as the system survival probability, is
usually measured by its complement, the system failure probability

    Pf = ∫ G(x) f(x) dx                                                   (1)

where x is the random variable vector, which usually contains load effects and component
resistances; f(x) is the joint probability density function of the vector x; and G(x) is
the failure indicator function, equal to unity when the system fails and zero when the
system survives. The integral in Eq. (1) is often difficult to obtain analytically,
as the integration hyperspace defined by G(x) = 1 is usually extremely oddly
shaped. Fortunately, two candidates for approximation are available to provide
practically accurate solutions to Eq. (1), namely bounding techniques and Monte Carlo
simulation [Ditlevsen 1979, Fu and Moses 1988]. A conventional system reliability
index β is also used, which is converted from Pf through the cumulative distribution
function Φ(·) of the standard normal variable as follows:

    Φ(-β) = Pf                                                            (2)
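As a concrete illustration of Eqs. (1) and (2), the sketch below estimates Pf by crude Monte Carlo sampling for a hypothetical three-component parallel system and converts the result to a reliability index. The resistance and load statistics are invented for illustration and are not the bridge data used later in the paper.

```python
# Crude Monte Carlo sketch of Eqs. (1)-(2): estimate Pf = E[G(x)] by sampling,
# then convert to a reliability index via beta = -Phi^{-1}(Pf).
import math
import random

def failure_indicator(r, s):
    """G(x): 1 if the (hypothetical) parallel system fails, i.e. the total
    resistance is exceeded by the load effect, else 0."""
    return 1 if sum(r) < s else 0

def phi_inv(p):
    """Inverse standard normal CDF, by bisection on math.erf
    (dependency-free; a rational approximation would be faster)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def estimate_pf(n=200_000, seed=1):
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        # Hypothetical statistics: three resistances ~ N(10, 1), load ~ N(24, 3).
        r = [rng.gauss(10.0, 1.0) for _ in range(3)]
        s = rng.gauss(24.0, 3.0)
        fails += failure_indicator(r, s)
    pf = fails / n
    beta = -phi_inv(pf)          # Eq. (2): Phi(-beta) = Pf
    return pf, beta
```

For these (normal) statistics the exact answer is β = 6/√12 ≈ 1.73, so the simulation should return Pf near Φ(-1.73) ≈ 0.042; a tail probability this large needs no variance reduction, whereas the small Pf of real systems motivates the bounding and importance-sampling techniques cited above.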

It is important to note that structural system reliability as defined above is
not always a comprehensive measure of the entire risk to which the structural system
will be exposed. For example, a system reliability with respect to structural
strength may not include the impact on risk of fatigue failure or accidental loss of
some structural components. One should therefore bear in mind that a single
system reliability level for the intact system may not be sufficient in a decision-making
process, such as adding redundancy, strengthening critical components or setting the
frequency of inspection.

2.2 Redundancy and Damage Tolerability of Structural System

Redundancy is usually implied for structural systems, especially when
component capacity uncertainty is significant. However, the widely used term
"redundancy" causes misunderstanding as well as confusion, since even experts
hold a variety of definitions for this simple-looking concept [Working
Group on Concepts and Techniques 1988]. Redundancy has usually been defined in
codes as statical indeterminacy or multiple load paths. However, quantifications of
redundancy may vary according to the purposes of such applications, because
redundancy may be understood from various aspects of structural system function [Fu
and Moses 1989]. Redundancy has been used to address such contrasting issues as
functions of individual components versus the entire system, reserve versus
residual strength capacities, and deterministic versus probabilistic response measures.

It is also worth noting that a single measure of structural system reliability
may not be comprehensive enough to encompass effective redundancy. For example, in
some cases structural system reliability may be increased by adding a large amount
of reserve component strength instead of adding alternative load paths for
redundancy. In other situations, where sudden accidental member losses are
possible, redundancy is achieved only by multiple load paths, provided checks are
made of the conditional reserve strength given an accident.

Structural designs and evaluations are performed routinely at the level of
components. Probabilistic risk analysis is becoming more widely accepted in the
structural engineering profession as a basic means for establishing code factors.
Furthermore, overloading is addressed by reliability evaluation including the tail
effect of loading random variables. On the other hand, unexpected loadings
(accidents, etc.) are the major system-level events requiring redundancy. A
probabilistic redundancy measure for components was therefore suggested by Fu and
Moses [1989]. Its equivalent form using the reliability index β is the component
redundancy factor (CRFC), which is defined as follows:

    CRFC = (β - β|C) / β                                                  (3)

where β|C is the system reliability index given that component C is completely removed
(no longer serviceable), and β in Eq. (3) is the system reliability index of the intact
system. CRFC defined in Eq. (3) indicates the importance of the specific component to the
system in terms of system failure probability. A higher value of CRFC represents a
more critical component to the system. Based on the same concept, a damage
tolerability factor (DTFCd) is also proposed here:

    DTFCd = β|Cd / β                                                      (4)

where β|Cd is the conditional system reliability index given that
component C has experienced some amount but not total damage. This factor equals
unity when the damage state has no impact on the system at all, i.e., is
perfectly tolerable. Lower values occur when damage states are less tolerable to
the system.
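The two factors of Eqs. (3) and (4) are simple functions of the intact and conditional reliability indices; a minimal sketch follows. The β values in the usage note are illustrative, not results from the paper.

```python
# Sketch of Eqs. (3)-(4): component redundancy factor CRF_C and damage
# tolerability factor DTF_Cd from intact and conditional system reliability
# indices.

def crf(beta_intact, beta_given_loss):
    """Eq. (3): CRF_C = (beta - beta|C) / beta, where beta|C is the system
    reliability index given that component C is completely removed."""
    return (beta_intact - beta_given_loss) / beta_intact

def dtf(beta_intact, beta_given_damage):
    """Eq. (4): DTF_Cd = beta|Cd / beta; equals 1 when the damage state has
    no system-level impact, and drops as the damage becomes less tolerable."""
    return beta_given_damage / beta_intact
```

For instance, an intact system β of 5.6 that drops to 3.4 when a component is removed gives CRF ≈ 0.39, while a damage state that leaves β unchanged gives DTF = 1.0.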

2.3 Design of New Structures

When a new structure is designed, the management of structural system
reliability should be considered with respect to the above-defined redundancy
measures. The structural system reliabilities of possible design options can then be
compared. Deterioration of capacity over the lifetime should also be taken into
account in design. This can be exercised by investigating the CRFC and DTFCd
defined above. For example, some components may be found to have higher CRFC. Efforts
can then be made either to restrict their exposure to a severe deterioration environment
or to reduce their CRFC so that their failure would not jeopardize the system
survival. In certain circumstances an optimal solution can be formulated to reach
a balance of effective redundancy, including also inspection intervals.

2.4 Maintenance of Existing Structures

The typical operation to monitor and control the evolution of structural
safety over the lifetime is to perform periodic inspections and rating
evaluations and to do any necessary repairs or replacements. These periodic controls
have the effect of reducing uncertainties about the present structure state and also
reducing loading uncertainties by confining exposure to a specified time interval.

An important issue for existing structures in maintaining an acceptable safety
level is often to find an optimal inspection schedule. This consists of assessing
the evolution of structural system reliability, quantifying the effects of control
activities such as inspection, repair and rehabilitation on the reliability, and
estimating the costs of these activities and of failure consequences. The goal is to
find the best inspection intervals in terms of both economics and safety.

3. AN APPLICATION EXAMPLE

A steel single-span highway bridge structure is presented herein for
illustration. The purpose of this example is to explore the impact on system
response and safety of changes in member design and geometry as well as
different design strategies. A typical four equally-spaced girder configuration is
predetermined (Fig. 2) and span lengths are varied from 30 ft to 180 ft. L1 and L2 are
vehicle loads, D is dead load, R1 to R4 are resistances of girders, and r1 and r2
are lateral location coefficients of the vehicles off the lane center lines. All
random variables are assumed to be lognormally distributed except r1 and r2, which
are normally distributed. The random variables are presumed independent of each other,
except that the resistances are correlated with an equal correlation coefficient of
0.5, and the live loads with a correlation coefficient of 0.2 to account for maximum load
effect. The remaining statistical parameters are as follows: the biases of Lj (j=1,2), D
and Ri (i=1,...,4) are given respectively as 0.94, 0.98 and 1.1; the coefficients of
variation (COV's) of Lj, D and Ri are 30%, 9% and 12% respectively. In order to
compare the influences of design procedures on lifetime reliability, the girders are
designed according to the USA highway bridge code [AASHTO 1983] following both load
factor design (LFD) and traditional working stress design (WSD) respectively.

Variations of the system reliability index β, computed according to Eqs. (1) and (2)
assuming ductile girder failures, are displayed in the attached figures. The
Incremental Loading Method [Moses 1982] is used to identify the significant failure
modes, and the Importance Sampling method to simulate the system failure probability
Pf [Fu and Moses 1988]. The failure mechanism considered here is loss of equilibrium
of the beam model in the transverse direction depicted in Fig. 2. The two
significant failure modes are listed in Table 1a as follows:

Table 1a. Significant Failure Modes

    g1 = 3R1 + 2R2 + R3 - 1.5D - (C1 - C2·r1)L1 - (C3 + C4·r2)L2

    g2 = 3R4 + 2R3 + R2 - 1.5D - (C3 - C4·r1)L1 - (C1 - C2·r2)L2

where the coefficients Ci (i=1,...,4) depend on the girder positions described by the
transverse overhang length L' (see Fig. 2b):

Table 1b. Failure Mode Coefficients Dependent on Girder Positions

                                       C1       C2       C3       C4
    Girders NOT moved in, L' = 0       2.250    3.000    0.750    3.000
    Girders moved in, L' = 0.033L      2.304    3.210    0.696    3.210
    Girders moved in, L' = 0.067L      2.370    3.460    0.630    3.460
    Girders moved in, L' = 0.1L        2.440    3.750    0.560    3.750

These two modes represent the symmetric significant failure mechanisms. The other
two possible failure modes, with failure of either R1, R3 and R4 or R1, R2 and R4, are
not comparably significant, and therefore they are not included herein in the system
failure probability assessment.
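The two limit states of Table 1a can be checked directly by Monte Carlo simulation (Eq. (1)) using the biases, COV's and correlations stated above. The nominal values below (girder resistance 60, dead load 100, vehicle load 40, all in arbitrary units) and the standard deviation of the lateral offsets r1 and r2 are invented for illustration, since the paper's actual girder designs depend on span length and code provisions not reproduced here; the resulting Pf is therefore not one of the paper's numbers.

```python
# Monte Carlo sketch of the series system over the modes g1, g2 of Table 1a,
# with the Case-I coefficients (L' = 0) of Table 1b.
import math
import random

def equicorrelated_normals(rng, k, rho):
    """Standard normals with equal pairwise correlation rho, built from a
    shared factor: X_i = sqrt(rho)*Z0 + sqrt(1-rho)*Z_i."""
    z0 = rng.gauss(0.0, 1.0)
    return [math.sqrt(rho) * z0 + math.sqrt(1.0 - rho) * rng.gauss(0.0, 1.0)
            for _ in range(k)]

def lognormal(mean, cov, x):
    """Map a standard normal draw x to a lognormal with given mean and COV."""
    s = math.sqrt(math.log(1.0 + cov * cov))
    m = math.log(mean) - 0.5 * s * s
    return math.exp(m + s * x)

def system_pf(n=100_000, seed=7):
    rng = random.Random(seed)
    C1, C2, C3, C4 = 2.250, 3.000, 0.750, 3.000   # Case I row of Table 1b
    fails = 0
    for _ in range(n):
        # Resistances: bias 1.1, COV 12%, equal correlation 0.5;
        # hypothetical nominal resistance 60.
        R = [lognormal(1.1 * 60.0, 0.12, x)
             for x in equicorrelated_normals(rng, 4, 0.5)]
        # Live loads: bias 0.94, COV 30%, correlation 0.2; nominal 40.
        L = [lognormal(0.94 * 40.0, 0.30, x)
             for x in equicorrelated_normals(rng, 2, 0.2)]
        # Dead load: bias 0.98, COV 9%; nominal 100. Offsets r1, r2 normal
        # with zero mean and std 0.1 (assumed, not from the paper).
        D = lognormal(0.98 * 100.0, 0.09, rng.gauss(0.0, 1.0))
        r1, r2 = rng.gauss(0.0, 0.1), rng.gauss(0.0, 0.1)
        g1 = 3*R[0] + 2*R[1] + R[2] - 1.5*D \
             - (C1 - C2*r1)*L[0] - (C3 + C4*r2)*L[1]
        g2 = 3*R[3] + 2*R[2] + R[1] - 1.5*D \
             - (C3 - C4*r1)*L[0] - (C1 - C2*r2)*L[1]
        if g1 < 0.0 or g2 < 0.0:   # series system over the two modes
            fails += 1
    return fails / n
```

The equicorrelated construction exploits the stated correlation structure (a common factor plus independent parts), avoiding a general Cholesky factorization.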

A number of design options are considered and compared according to the
concepts discussed in the previous section. These design options are defined in
Table 2 below.

Table 2. Four Cases of Design Options

Case I: No overhang for outer girders and equal girder sizes designed according
to the load effect of the internal girder. [This is conventional
practice.] (Fig. 2a)
Case II: Outer girders moved in and equal girder sizes with the same strength as
Case I. [This compares the influence of girder geometry but with no
change in girder costs from Case I.] (Fig. 2b)
Case III: Outer girders moved in and equal girder sizes with the strength based
according to the load effect. [This compares the influence of using
spacing geometry to optimize weight.] (Fig. 2b)
Case IV: Outer girders moved in and unequal girders designed according to
respective load effect. [This compares the influence of optimizing
geometry, spacing, and member sizes.] (Fig. 2c)

3.1 System Reliability

A common design of such bridge structures is to spread the girders as far apart as
possible (Fig. 2a) and to set the external girders equal to the internal ones, which
are designed according to the code. Fig. 3 compares system reliability and external
and internal component (girder) reliability respectively for designs using
LFD and WSD. LFD, which is similar to probability-based load factor design, shows its
advantage of a more uniform component reliability level over span lengths. WSD does
not exhibit uniform component reliabilities with span, because longer spans have
higher dead load while uniform stress levels are used. The external girders (R1 and
R4) have higher reliability than the internal girders (R2 and R3), due to the extra
reserve strength introduced by the girder size equalization of conventional design. In all
cases, the entire system shows very high reliability provided by the effective
parallel girders.

An interesting alternative to the conventional design is to place the outer
girders closer to the center (Fig. 2b). For the girders moved in by L' equal to 10%
of the bridge width L, the system reliability indices β of these two design options
are plotted in Fig. 4. It is observed that the conventional design with L' equal to
zero provides slightly higher reliability. For example, in Fig. 4 the conventional
LFD design gives a system β equal to 5.59 for the span of 120 ft, while the corresponding
alternative design with L' = 0.1L yields a value of 5.47. It should be noted that
the resistances (R1 to R4) of the alternative design with L' = 0.1L are taken equal
to those of the conventional design (L' = 0) in order to have an unbiased comparison.
This is done only for illustration, since the alternative design requires less
member capacity in the girders because of their lower load effect. The savings in member
sizes attract designers seeking to reduce the construction cost. This especially
occurs when contractors are permitted "value engineering" options to
change bid designs and codes do not contain system constraints. It is obvious that
the system reliability level would decrease even further if the girder
capacities were designed for the (lower) load effect. This is also shown in Fig. 4.
The resulting decrease of the system reliability level is due to the reduction of reserve
strength when the outer girders are moved inwards.

Figs. 5 and 6, respectively for LFD and WSD, give more insight into the influence of
girder position on system reliability, where the external girders are moved in by L', a
fraction of the bridge width L.

3.2 System Redundancy and Damage Tolerability

Tables 3a and 3b below display the damage tolerability factors of external and
internal girders of the conventional design (Fig. 2a), respectively for damage levels
of 15% and 30% loss of component strength. Such losses occur in bridges either
due to material damage such as corrosion or, frequently, due to collisions with
girders of overpass structures.

Table 3a. DTF of R1 and R2 for 15% Strength Loss - Case I

    Span Length (ft)        30     60     90     120    150    180

    LFD (15% loss in R1)    .96    .96    .96    .95    .95    .95
    WSD (15% loss in R1)    .95    .96    .96    .96    .96    .96
    LFD (15% loss in R2)    .97    .97    .96    .97    .96    .96
    WSD (15% loss in R2)    .97    .97    .97    .97    .97    .96

Table 3b. DTF of R1 and R2 for 30% Strength Loss - Case I

    Span Length (ft)        30     60     90     120    150    180

    LFD (30% loss in R1)    .89    .89    .88    .88    .87    .86
    WSD (30% loss in R1)    .89    .89    .89    .89    .89    .88
    LFD (30% loss in R2)    .93    .93    .92    .92    .92    .91
    WSD (30% loss in R2)    .93    .93    .93    .93    .93    .93

These tables show relatively uniform changes over span lengths in the system reliability
index β due to damage. Table 3 may be used when a practical quantification of the β
decrease due to damage is needed. For the data used herein, for example, system β
is reduced by around 3% due to 15% damage on any one of the girders (Table 3a). For
higher damage (Table 3b), a 30% strength reduction of an external girder leads to
around a 12% decrease of system β, while that of an internal girder results in only
about a 7% decrease of system β.

Fig. 7 displays the CRF's (component redundancy factors) for both the internal (R2
and R3) and external (R1 and R4) girders of the conventional design. These curves
reflect the impact of the total loss of the girders.

Higher CRF's indicate greater importance. The external girders appear more
important to the system, as they contribute more to stability due to their geometric
positions and, in addition, they contain most of the reserve strength capacity of the
system. External girder loss is more likely to be caused by a collision of an
oversized vehicle, while fatigue failure is the major concern for the loss of the
internal girders.

3.3 Global Risks

Pf,total, an estimate of the total system failure probability including the risks
formulated below, may be very helpful in a cost-oriented optimization:

    Pf,total = Pf + Pe·Pf|R1 + Pi·Pf|R2                                   (5)

where Pe and Pi represent the probabilities of loss of an external girder (by
collision) and of an internal girder (by fatigue) respectively, and Pf|R1 and Pf|R2 are
the conditional system failure probabilities given respectively that R1 and R2 are not
serviceable. Such failure probabilities are related to the β|R1 and β|R2 used in the CRF's
above by Eq. (2). A similar application can be exercised for the total failure probability
due to partial damage of components (not complete loss) by using Eq. (5) with all
conditional probability definitions also addressing damage such as corrosion.
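Taking Eq. (5) as the combination Pf,total = Pf + Pe·Pf|R1 + Pi·Pf|R2, which is consistent with the where-clause in the text, the total risk can be evaluated directly once each reliability index is converted through Eq. (2). The β values and event probabilities in the usage note are illustrative only.

```python
# Sketch of the global-risk combination of Eq. (5): intact-system failure
# probability plus accident-initiated and fatigue-initiated sequences.
import math

def norm_cdf(x):
    """Standard normal CDF via math.erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pf_total(beta_intact, p_e, beta_given_R1, p_i, beta_given_R2):
    """Pf,total = Pf + Pe*Pf|R1 + Pi*Pf|R2, each Pf obtained from its
    reliability index through Eq. (2): Pf = Phi(-beta)."""
    pf = norm_cdf(-beta_intact)
    return pf + p_e * norm_cdf(-beta_given_R1) + p_i * norm_cdf(-beta_given_R2)
```

For example, with an intact β of 5.5, a collision probability of 10^-3 with conditional β|R1 = 2.0, and a fatigue probability of 10^-2 with conditional β|R2 = 3.0, the accident and fatigue sequences dominate the intact-system term by several orders of magnitude, which is the point of the global-risk formulation.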

It is interesting to take a look at the alternative designs with unequal
girders, in which the external girders are designed according to their load effects
(Fig. 2c). Fig. 8 shows the system reliability indices β of these two optional
designs. A large difference in system reliability is clearly observed, due to the
reduction of reserve strength in the external girders. The system reliability for
the unequal-girder case is very close to the component reliability levels. This
means that redundancy no longer exists despite the parallel load geometry!

The importance of the internal and external girders also changes in the new
design. The internal girders become more important in the unequal-girder design,
as shown by their higher CRF's in Fig. 9. Some of the values in Fig. 9 are above 1,
which indicates that the residual reliability index β|C for the internal girder is
below zero. Thus, the structure is likely to collapse in the event of a single
member failure.

3.4 Time Variations in Reliability

It is also desirable to assess the system reliability over various time
intervals where the maintenance of the structural system is concerned. For the case
of highway bridges, routine periodic inspections are conducted. The maximum load
effect distribution varies with the length of the time period: it has been found by
simulation [Moses and Ghosn 1985] that its mean value increases and its COV
decreases with the length of the time interval. The system reliability of this bridge
structure over the time interval is shown in Fig. 10, where only the variation of the load
distribution is included for ease of illustration.

The information provided by Fig. 10 is important in the decision process to
determine proper inspection intervals. Costs should enter this process. An example
of an optimal inspection interval is the one that minimizes total cost. The total cost
may include the costs of inspections, with both economic interest rates as well as
future failure consequences entering the decision process.
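The inspection-interval trade-off described above can be sketched as a toy cost minimization. The inspection cost, failure cost, interest rate, and the assumed growth of the per-interval failure probability with interval length are all invented for illustration; only the 50/25/10/5/2-year schedule options mirror those of Fig. 10.

```python
# Toy optimal-inspection-schedule sketch: total expected cost over a
# 50-year life as a function of the number of inspections.

def total_cost(n_inspections, c_insp=10.0, c_fail=1.0e5, life=50.0, rate=0.04):
    interval = life / n_inspections
    # Assumed model: the probability of failure within an interval grows
    # quadratically with its length (deterioration between inspections).
    pf_interval = 1.0e-6 * interval ** 2
    cost = 0.0
    for k in range(1, n_inspections + 1):
        t = k * interval
        disc = (1.0 + rate) ** (-t)        # present-value discounting
        cost += disc * (c_insp + c_fail * pf_interval)
    return cost

def best_schedule(options=(1, 2, 5, 10, 25)):
    """Pick the number of inspections minimizing total expected cost,
    over the 50/25/10/5/2-year options of Fig. 10."""
    return min(options, key=total_cost)
```

With these (arbitrary) parameters the optimum balances frequent, cheap inspections against the growing failure risk of long intervals; a real application would replace `pf_interval` with the time-dependent system reliability of Fig. 10 converted through Eq. (2).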

4. CONCLUSIONS

Intermediate states of structural systems include those states prior to system
collapse and subsequent to component damage or loss. Quantifications of redundancy
in structural systems are directly related to the intermediate states and depend
on the goals of such quantifications. Factors of component redundancy and
damage tolerability are suggested herein using conditional reliability indices. The
application example of a highway bridge shows the importance of considering the
consequences of component failure on the system reliability and of incorporating them
in the optimal design. The conventional design of external girders is shown to
provide higher structural system reliability than the design with external girders
moved toward the center line. The conventional equal-girder design is shown to
provide high levels of redundancy. Optimal design using only member constraints may
produce structures with little reserve to protect against accidental damage.
Assessment of the evolution of system reliability over various time periods is performed,
which is important for determining an optimal inspection schedule.

5. ACKNOWLEDGEMENTS

Support for this work from the National Science Foundation (Grant ECE85-16771)
and the Ohio Department of Transportation (A Reliability Analysis of Permit Loads
on Bridges) is gratefully acknowledged.

6. REFERENCES

[1] AASHTO, Standard Specifications for Highway Bridges, 13th Edition, Washington,
    DC, USA, 1983

[2] Ditlevsen, O., "Narrow Reliability Bounds for Structural Systems", J. Struct.
    Mech., Vol. 7, 1979, p. 435

[3] Fu, G. & Moses, F., "Probabilistic Concepts of Redundancy and Tolerability for
    Structural Systems" (to appear), Proc. ICOSSAR '89, San Francisco, CA, USA,
    Aug. 1989

[4] Fu, G. & Moses, F., "Importance Sampling in Structural System Reliability",
    Proc. 5th ASCE Specialty Conference, P. D. Spanos (Ed.), Blacksburg, VA, USA,
    May 25-27, 1988, p. 340

[5] Fu, G. & Moses, F., "Lifetime System Reliability Models with Application to
    Highway Bridges", Proc. ICASP5, N. C. Lind (Ed.), Vancouver, B.C., Canada,
    May 25-29, 1987, p. 71

[6] Moses, F., "System Reliability Developments in Structural Engineering",
    Structural Safety, Vol. 1, 1982, p. 3

[7] Moses, F. and Ghosn, M., "A Comprehensive Study of Bridge Loads and Reliability",
    Report No. FHWA/OH-85/005, Department of Civil Engineering, Case Western
    Reserve University, Cleveland, OH, Jan. 1985

[8] Working Group on Concepts and Techniques: Position Paper, "New Directions in
    Structural System Reliability", D. M. Frangopol (Ed.), Proc. of a Workshop on
    Research Needs for Applications of System Reliability Concepts and Techniques
    in Structural Analysis, Design and Optimization, Boulder, CO, USA,
    Sept. 12-14, 1988, p. 363

Fig. 1 Intermediate States in Structural Reliability Theory
[diagram not reproduced: the intact state, intermediate states addressed by
structural component reliability, and system states addressed by structural
system reliability, connected by environmental hazard]

Fig. 2 Equally-spaced Girder Bridge [schematics not reproduced]
a) Conventional Design - Case I
b) Alternative Design 1: Girders Moved-in - Cases II, III
c) Alternative Design 2: Unequal Girders - Case IV

Fig. 3 System and Component Reliability for LFD and WSD Designs vs. Span Length
[plot not reproduced]
Fig. 4 System Beta of Optional Designs (conventional vs. girders moved in,
L' = 0.1L; LFD and WSD) vs. Span Length (ft) [plot not reproduced]

Fig. 5 System Beta vs. Girder Positions (LFD), Case II,
L' = 0.033L, 0.067L, 0.100L, vs. Span Length (ft) [plot not reproduced]
Fig. 6 System Beta vs. Girder Positions (WSD), Case II,
vs. Span Length (ft) [plot not reproduced]

Fig. 7 Component Redundancy Factor (Conventional Design), Case I:
CRF of R1 and R2 (LFD and WSD) vs. Span Length (ft) [plot not reproduced]

Fig. 8 Alternative Designs: Equal & Unequal Girders, Case I:
System Beta for Equal and Unequal Girders (LFD and WSD) vs. Span Length (ft)
[plot not reproduced]

Fig. 9 Component Redundancy Factor (Unequal Girder Design), Case IV:
CRF of R2 (LFD and WSD) vs. Span Length (ft) [plot not reproduced]

Fig. 10 System Reliability vs. Number of Inspections (Time Interval),
Span Length = 120 ft, LFD and WSD; inspection options: 1 (50 years),
2 (25 years), 5 (10 years), 10 (5 years), 25 (2 years) [plot not reproduced]


RELIABILITY ANALYSIS OF EXISTING STRUCTURES
FOR EARTHQUAKE LOADS

Hitoshi Furuta*, Masata Sugito**, Shin-ya Yamamoto*** & Naruhito Shiraishi*


*Department of Civil Engineering
Kyoto University, Kyoto 606, Japan
**Department of Transportation Engineering
Kyoto University, Kyoto 606, Japan
***Mitsubishi Heavy Industries, Hiroshima 733, Japan

SUMMARY

This paper provides a reliability analysis of existing bridge structures for earthquake loads. In the
reliability analysis, the ultimate limit state is defined in terms of displacement instead of load effect or
stress. It is assumed that failure occurs when the maximum response displacement becomes larger
than a prescribed allowable displacement. The maximum displacement is calculated from the non-stationary
power spectrum of earthquake motion. This response analysis can include the effects of inelastic
behavior of material, natural frequency of structures, ground condition and seismic zoning. A numerical
example is presented to demonstrate the applicability of the method proposed here.

INTRODUCTION

In Japan, consideration for earthquakes is quite important to ensure the reliability of existing
structures during their lifetime. In the reliability analysis of existing structures, we can utilize
more exact information than is available at the design stage. Namely, we can obtain reliable information
with respect to the ground condition, the change of traffic loads, the occurrence of earthquakes,
and the deterioration of structural resistance, using various data from observation, field tests and
laboratory tests.
In this paper, an attempt is made to develop a method for evaluating the reliability of existing
bridge structures for earthquake loads. In general, pier structures are likely to suffer damage
due to earthquakes. Hence, attention is paid to the reliability analysis of bridge piers, especially
reinforced concrete (RC) piers. In the reliability analysis, only the ultimate limit state is considered
among the possible limit states. The limit state is specified in terms of displacement instead of
load effect or stress. It is assumed that failure occurs when the maximum response displacement

becomes larger than a prescribed allowable displacement. This formulation is intended to take
into account, without difficulty, the effect of plastic behavior of RC piers after the yielding of
the reinforcing steel in the reliability assessment of earthquake-resistant structures. The maximum
displacement is calculated from the non-stationary power spectrum of earthquake motion 1). This
response analysis can include the effects of natural frequency of structures, ground condition and
seismic zoning in addition to the effect of inelastic behavior of material. A numerical example is
presented to demonstrate the applicability of the method proposed here.
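The spectral representation developed in the following section (Eq. (1)), a sum of cosines with amplitudes √(2Gx(t,ωk)Δω) and random phases, can be sketched as follows. The spectrum shape, time envelope, and all numerical parameters here are invented for illustration; in the paper, Gx(t,ω) comes from the regression model of Eqs. (2) and (3), not from this toy form.

```python
# Sketch of the spectral representation of Eq. (1): synthesize a ground
# acceleration sample from a hypothetical non-stationary power spectrum.
import math
import random

def gx(t, w, t_total=20.0):
    """Hypothetical non-stationary spectrum: a Kanai-Tajimi-like shape in
    frequency, modulated by a smooth envelope in time (both assumed)."""
    env = math.exp(-((t - 0.3 * t_total) / (0.35 * t_total)) ** 2)
    wg, hg = 15.0, 0.6           # assumed ground frequency and damping
    r = (w / wg) ** 2
    shape = (1.0 + 4.0 * hg**2 * r) / ((1.0 - r) ** 2 + 4.0 * hg**2 * r)
    return 100.0 * env * shape

def synthesize(t, dw=0.5, w_max=60.0, seed=3):
    """Eq. (1): x(t) = sum_k sqrt(2 Gx(t, wk) dw) cos(wk t + phi_k).
    Re-seeding per call keeps the phases phi_k fixed across time points,
    so repeated calls trace one realization."""
    rng = random.Random(seed)
    x, w = 0.0, dw
    while w <= w_max:
        phi = rng.uniform(0.0, 2.0 * math.pi)
        x += math.sqrt(2.0 * gx(t, w) * dw) * math.cos(w * t + phi)
        w += dw
    return x
```

Evaluating `synthesize` over a grid of times yields one acceleration history; response statistics would then come from integrating a structural model under many such realizations (or, as in the paper, directly from the intensity parameter of the spectrum).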

SEISMIC RESPONSE MODEL USING NON-STATIONARY POWER SPECTRUM

In this paper, the response of structures is related to the intensity of the non-stationary power
spectrum of the input motion. As a representative parameter of structural response, the maximum
displacement of the top of the pier is employed instead of maximum force or stress, because failure of
the pier structure can be easily expressed in terms of displacement if the yield displacement and
ductility factor are known.
In past studies 2), the earthquake load has been estimated on the basis of some approximate relation
between the maximum ground acceleration and the maximum response of structures. However,
the structural response is greatly influenced by the natural frequency of the structure.
Moreover, the duration of earthquake motion affects the failure path, in that RC structures reach
failure through a successive process of alternating deformations under the earthquake motion.
Considering these facts, it is desirable to take into consideration the non-stationary characteristics
of earthquake motion. Especially when estimating the maximum response deformation, it is
preferable to use the non-stationary power spectrum instead of the maximum acceleration of the input
motion, because the former can account for the spectral characteristics of the input ground motion.
An attempt is made here to estimate the structural response, covering both the elastic
and inelastic regions, by paying attention to the intensity parameter of the non-stationary power
spectrum. The acceleration of the earthquake is written as follows, considering the non-stationary
characteristic 3),4):

    x(t) = Σk √(2 Gx(t,ωk) Δω) cos(ωk t + φk)                             (1)

where Gx(t,ωk) is the non-stationary power spectrum for circular frequency ωk and time t, Δω is the
discretization step of ω, and φk is the phase at t = 0. The square root of Gx(t,ωk) is given as a
piecewise function of t, with one expression for 0 ≤ t ≤ ts and another for ts < t   (2)

where αm(f) is the maximum value of √Gx(t,ω), with units of gal/√(rad/sec), ts(f) is the starting
time, and tp(f) is the time from ts(f) to the appearance time of the maximum value αm(f). αm(f)
is given by the following regression equation with respect to the frequency f (Hz), magnitude M,
and epicentral distance Δ:

                                                                          (3)

where B0(f), B1(f) and B2(f) are coefficients obtained from regression analysis
based on Japanese strong-motion data and are functions of the frequency f.
Since αm(f) given by Eq. (3) denotes the intensity at the bedrock, it does not involve the effect
of the local ground condition overlying the bedrock. Using the conversion factor βs(f) 5), it is possible to
introduce the effect of nonlinear soil amplification into the estimation of the intensity parameter at the
soil surface. Then, βs(f) is given as follows:

    For f > 1.0:
        log βs(f) = u0(f) + u1(f)·αm(f),     αm ≥ α'm
        log βs(f) = u0(f) + u1(f)·α'm(f),    αm < α'm                     (4)

    For f ≤ 1.0:
        log βs(f) = u0(f)                                                 (5)

where u0 and u1 are functions of the frequency f, the ground parameter Sn and the depth to the bedrock
dp. Sn is evaluated from the blow-count profile. α'm(f) denotes the folding point of the stress-strain curve.
Consequently, the intensity parameter at the ground surface αms(f) is calculated as

    αms(f) = βs(f)·αm(f)                                                  (6)
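A minimal sketch of the soil-amplification step of Eqs. (4) to (6) follows, taking Eq. (6) as the product of the conversion factor and the bedrock intensity. The coefficient values u0, u1 and the folding point α'm are invented for illustration (in the paper they depend on f, Sn and dp), and base-10 logarithms are assumed.

```python
# Sketch of Eqs. (4)-(6): piecewise soil conversion factor beta_s(f) and
# surface intensity alpha_ms(f) = beta_s(f) * alpha_m(f).

def beta_s(f, alpha_m, u0=0.3, u1=-1.0e-3, alpha_fold=200.0):
    """Conversion factor for nonlinear soil amplification. u0, u1 and the
    folding point alpha_fold are illustrative stand-ins for the paper's
    frequency- and site-dependent values."""
    if f <= 1.0:
        log_bs = u0                              # Eq. (5)
    elif alpha_m >= alpha_fold:
        log_bs = u0 + u1 * alpha_m               # Eq. (4), alpha_m >= alpha'_m
    else:
        log_bs = u0 + u1 * alpha_fold            # Eq. (4), alpha_m < alpha'_m
    return 10.0 ** log_bs

def alpha_ms(f, alpha_m, **kw):
    """Eq. (6): intensity parameter on the soil surface."""
    return beta_s(f, alpha_m, **kw) * alpha_m
```

Note that the two branches of Eq. (4) meet continuously at αm = α'm, and that below 1 Hz the amplification is independent of intensity per Eq. (5).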

In this paper, the elasto-plastic analysis is performed using a bi-linear model. In general,
degrading tri-linear models, i.e., the Takeda, Mutoh and Fukuda models, have been used for
the yielding behavior of reinforced concrete members. However, this study aims to show the efficiency
of the method for estimating the maximum response displacement. To keep the discussion simple,
a bi-linear model is employed instead of the tri-linear models.
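A minimal form of the bi-linear model adopted here is sketched below: an elastic stiffness k1 up to the yield displacement dy and a post-yield stiffness k2 beyond it (the monotonic backbone only; hysteresis rules are omitted). The yield-displacement estimate δy = Sv/ωn from the velocity response spectrum, as used in the text, is included as well. All parameter values are illustrative.

```python
# Bi-linear restoring-force backbone and the delta_y = Sv / omega_n estimate.
import math

def bilinear_force(d, k1=100.0, k2=10.0, dy=1.0):
    """Monotonic bi-linear backbone: elastic up to |d| = dy, then a reduced
    post-yield stiffness k2. Parameter values are illustrative."""
    if abs(d) <= dy:
        return k1 * d
    sign = 1.0 if d > 0 else -1.0
    return sign * (k1 * dy + k2 * (abs(d) - dy))

def yield_displacement(sv_cm_s, f0_hz):
    """delta_y = Sv / omega_n: yield displacement from a (frequency-
    independent) velocity response spectrum, with omega_n = 2*pi*f0."""
    return sv_cm_s / (2.0 * math.pi * f0_hz)
```

For instance, a spectral velocity of 20 cm/sec at a natural frequency of 1 Hz gives δy ≈ 3.2 cm; sweeping V and f0 over grids of values reproduces the kind of parametric study described below.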
For the variation of ground motion levels, fifteen combinations of magnitude and epicentral
distance are examined. Table 1 presents the ground conditions used in the numerical computation;
Ground 1 in Table 1 denotes the bedrock. In the elasto-plastic response analysis, it is necessary to
determine the yield displacement at the top of the pier. However, the determination of the yield
displacement is very difficult, because it is affected by the damage state of the concrete. Using
the characteristic that the velocity response spectrum is constant regardless of the natural frequency, it is
possible to calculate the yield displacement δy. Namely, δy is estimated by dividing the response
velocity by the natural circular frequency. A parametric analysis is performed for eight cases
with V (velocity) = 2.0, 5.0, 8.0, 10.0, 20.0, 30.0, 40.0 and 50.0 cm/sec. For the natural frequency,
ten cases are considered: f0 = 0.2, 0.5, 0.7, 1.0, 2.0, 3.0, 4.0, 5.0, 7.0 and 10.0 Hz, and
two damping factors h = 0.05 and h = 0.1 are considered. Table 2 provides the comparison of the

Table 1 Ground Condition

              Ground 1   Ground 2   Ground 3   Ground 4

    Sn           -         -0.2        0.2        0.8
    dp (m)       -         20.0       50.0      150.0

Table 2 Results of Regression Analysis

                Ground 1        Ground 2        Ground 3        Ground 4
 f_0    h     0.05    0.1     0.05    0.1     0.05    0.1     0.05    0.1
--------------------------------------------------------------------------
 0.19  a_m    0.989   0.997   0.942   0.997   0.996   0.998   0.998   0.999
       A_max  0.929   0.960   0.961   0.976   0.984   0.987   0.961   0.958
 0.49  a_m    0.984   0.992   0.983   0.986   0.972   0.987   0.986   0.994
       A_max  0.974   0.987  *0.984  *0.988  *0.982  *0.988   0.962   0.974
 0.73  a_m    0.967   0.979   0.969   0.974   0.986   0.977   0.982   0.911
       A_max  0.942   0.958  *0.974  *0.976  *0.991  *0.985  *0.984   0.976
 0.97  a_m    0.993   0.993   0.990   0.993   0.965   0.977   0.938   0.934
       A_max  0.991   0.992   0.989   0.989  *0.975   0.971   0.906   0.889
 1.99  a_m    0.993   0.993   0.984   0.987   0.981   0.986   0.992   0.989
       A_max  0.993   0.993   0.984   0.981   0.963   0.972   0.989   0.985
 3.01  a_m    0.899   0.913   0.919   0.923   0.917   0.889   0.877   0.891
       A_max *0.927  *0.939  *0.964  *0.966  *0.975  *0.959  *0.919  *0.925
 4.03  a_m    0.921   0.941   0.948   0.948   0.929   0.934   0.935   0.971
       A_max *0.952  *0.970  *0.976  *0.982  *0.981  *0.990  *0.967  *0.992
 4.99  a_m    0.992   0.995   0.986   0.961   0.977   0.983   0.969   0.967
       A_max  0.988  *0.998   0.970   0.977   0.973   0.988   0.990   0.989
 7.03  a_m    0.980   0.979   0.956   0.949   0.940   0.922   0.910   0.897
       A_max *0.998  *0.999  *0.998  *0.999  *0.998  *0.996  *0.958  *0.950
10.03  a_m    0.993   0.991   0.975   0.972   0.972   0.970   0.967   0.957
       A_max  0.981   0.985   0.963   0.971   0.956   0.963  *0.996  *0.997

proposed method and the method based on the maximum acceleration A_max. The values in Table 2 are the correlation coefficients obtained by the two methods for the case of V = 30.0 cm/sec. The closer the correlation coefficients are to unity, the better the solutions. The values marked with the symbol * indicate that the method using the maximum acceleration gives better results than the proposed method. From Table 2, it is seen that the proposed method is preferable to the method using the maximum acceleration when the natural frequency is less than 2.0 Hz. Although there are several values with the symbol * in the cases of f_0 = 0.5 and 0.7 Hz, the differences are negligibly small. This tendency is due to the fact that when f_0 is greater than 2.0 Hz, the structural response strongly depends on ground motions in high frequency ranges, and hence the maximum acceleration corresponds well to the maximum response displacement. It is also because the assumption that the response velocity is constant is valid in the large frequency region. Consequently, it is concluded that the proposed method is suitable for the analysis of structures with long natural periods, such as suspension bridge towers and high-rise buildings.

The relation between a_m(f_0) and Y_max is influenced by the natural frequency, the damping characteristics and the yielding displacement. The yielding displacement may differ among structures with the same natural frequency. Hence, the relation between a_m(f_0) and Y_max is calculated for various values of the natural frequency through the elasto-plastic analysis. Table 3 shows the inclining angles of the regression curve and the correlation coefficients obtained here. This table can be used to estimate an approximate value of the angle B with the aid of an interpolation procedure.

The maximum response displacement at the top of pier structures can be easily calculated using the value of B and Eq. 6.

(7)
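The interpolation step described above can be sketched as follows. The tabulated slopes B(f_0) below and the product form assumed for Eq. 7 (Y_max = B(f_0) · a_m(f_0)) are illustrative assumptions, not values taken from the paper's Table 3:

```python
import math

# Illustrative Table-3-style regression slopes B for one ground/damping case
# (values assumed for illustration only):
f_tab = [0.2, 0.5, 0.7, 1.0, 2.0]
B_tab = [3.66, 2.10, 1.45, 0.95, 0.40]

def interp_B(f0):
    """Linear interpolation of the regression slope B in log-frequency."""
    x = math.log(f0)
    xs = [math.log(f) for f in f_tab]
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return B_tab[i] + t * (B_tab[i + 1] - B_tab[i])
    raise ValueError("f0 outside tabulated range")

# Assumed form of Eq. 7: Y_max = B(f0) * a_m(f0)
a_m = 0.8                      # intensity parameter from Eq. 6 (illustrative)
Y_max = interp_B(0.6) * a_m
```

Interpolating in log-frequency keeps the estimate well-behaved over the decade-wide range of f_0 considered in the parametric analysis.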

SEISMIC RELIABILITY ANALYSIS OF EXISTING STRUCTURES

In this paper, the analytical process shown in Fig. 1 is employed to evaluate the seismic reliability of existing structures. The occurrence pattern of earthquakes varies according to the location of active faults. Therefore, seismic zoning is carried out by considering the effect of active faults. For two big cities in Japan, i.e., Tokyo and Osaka, seismic zoning is performed using concentric circles and radial rays. In the zoning, the maximum value of the epicentral distance is 300 km, whereas the minimum values depend on each zone.

The cumulative distribution function of the earthquake magnitude F_M(m) is given in the form of the empirical law of Gutenberg-Richter 6).

F_Mk(m) = P_r(M ≤ m | M_L ≤ M ≤ M_Uk) = [1 − exp(−b_k ln(10)(m − M_L))] / [1 − exp(−b_k ln(10)(M_Uk − M_L))]        (8)

Table 3 Inclining Angles and Coefficients of Correlation
(B: inclining angle; R: coefficient of correlation; entries marked "-" are illegible in the source)

h = 0.05
 f_0        Ground 1   Ground 2   Ground 3   Ground 4
 0.2   B       -          -          -          -
       R     0.9895     0.9918     0.9943     0.9891
 0.5   B       -          -          -          -
       R     0.9832     0.9847     0.9813     0.9740
 0.7   B       -          -          -          -
       R     0.9852     0.9767     0.9411     0.8909
 1.0   B       -          -          -          -
       R     0.9852     0.9767     0.9411     0.8909
 2.0   B       -          -          -          -
       R     0.9919     0.9666     0.9813     0.9930

h = 0.1
 f_0        Ground 1   Ground 2   Ground 3   Ground 4
 0.2   B    0.3664E+1  0.4015E+1  0.4482E+1  0.4870E+1
       R     0.9973     0.9974     0.9985     0.9964
 0.5   B       -          -          -          -
       R     0.9917     0.9881     0.9935     0.9820
 0.7   B       -          -          -          -
       R     0.9811     0.9822     0.9897     0.9751
 1.0   B       -        0.4059       -          -
       R     0.9879     0.9805     0.9461     0.9109
 2.0   B       -          -          -          -
       R     0.9894     0.9625     0.9827     0.9959

Probabilistic Characteristics            Allowable Displacement
(Magnitude, Occurrence, Location)        Y_a
              |                              |
     Response Analysis                       |
     Maximum Displacement Y_max              |
              |                              |
              +-----> Reliability Analysis <-+
                      Z = Y_a - Y_max
                      P_f : Failure Probability

Fig. 1 Process of Reliability Analysis

where M_L and M_Uk are the minimum and maximum magnitudes in the k-th zone, and b_k is the b-value in the k-th zone. Eq. 8 denotes the cumulative distribution of the magnitude of earthquakes occurring in the k-th zone. Denoting the area of the k-th zone by A_k, the number of earthquake occurrences within t_r years can be calculated as ν_k A_k t_r, where ν_k is the number of earthquake occurrences per unit area per year. It is assumed that the magnitudes of the n earthquakes occurring in the k-th zone, M_1, M_2, ..., M_n, are independent of each other and follow the same probability distribution F_Mk(m). Then, the distribution of the maximum value Z among M_1 to M_n can be obtained as

F_Z(z) = P_r(Z ≤ z) = P_r(∩_{i=1}^{n} {M_i ≤ z}) = {F_Mk(z)}^n        (9)

Replacing n by ν_k A_k t_r, the probability density function of the maximum magnitude within t_r years can be derived as

f_Z(z) = ν_k A_k t_r {F_Mk(z)}^{ν_k A_k t_r − 1} f_Mk(z)        (10)

Obviously, the density f_Mk in Eq. 10 is calculated by differentiating Eq. 8.
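The hazard model of Eqs. 8 and 9 is straightforward to evaluate; a minimal sketch with assumed zone parameters (b-value, magnitude bounds, occurrence rate — all illustrative), from which the density of Eq. 10 follows by differentiation:

```python
import math

def gr_cdf(m, b_k, m_l, m_u):
    """Truncated Gutenberg-Richter CDF of Eq. 8 for the k-th zone."""
    if m <= m_l:
        return 0.0
    if m >= m_u:
        return 1.0
    ln10 = math.log(10.0)
    return (1.0 - math.exp(-b_k * ln10 * (m - m_l))) / \
           (1.0 - math.exp(-b_k * ln10 * (m_u - m_l)))

def max_magnitude_cdf(z, b_k, m_l, m_u, nu_k, a_k, t_r):
    """CDF of the largest magnitude in t_r years: Eq. 9 with n = nu_k*A_k*t_r."""
    return gr_cdf(z, b_k, m_l, m_u) ** (nu_k * a_k * t_r)

# Illustrative (assumed) zone parameters: b = 0.9, 5 <= M <= 8,
# and nu_k * A_k = 0.02 events per year in the zone.
F50 = max_magnitude_cdf(7.0, 0.9, 5.0, 8.0, 0.02, 1.0, 50.0)
F100 = max_magnitude_cdf(7.0, 0.9, 5.0, 8.0, 0.02, 1.0, 100.0)
```

As expected, a longer exposure time makes the maximum magnitude stochastically larger, so its CDF at a fixed level decreases with t_r.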


The occurrence location of an earthquake in each zone is considered to be independent from year to year and to follow a linear distribution such as

(11)

where Δ_L and Δ_Uk are the lower and upper bounds of the epicentral distance, respectively. Using the probability density functions given by Eqs. 10 and 11, the maximum response displacement Y_max can be calculated based on the relation expressed by Eq. 7. In the calculation, the allowable displacement Y_a, the earthquake magnitude M and the epicentral distance Δ are treated as random variables, while the others are treated as deterministic quantities.
Here, the failure probability is calculated based on AFOSM (Advanced First-Order Second-Moment Method) 7). The reason is that AFOSM can include any kind of random variable and deal with non-normal variables as well as normal variables through a transformation from non-normal to normal distributions. The limit state function used here is as follows:

(12)

where β_a(f_0) can be given by Eqs. 4 and 5, and a_m(f_0) is given by Eq. 3.
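As a sketch of the AFOSM computation, the Hasofer-Lind/Rackwitz-Fiessler iteration locates the design point in standard normal space and returns the reliability index as its distance from the origin. The limit state below is a generic linear example (assumed for illustration), for which β = 3/√5 is known in closed form:

```python
import math
from statistics import NormalDist

def hlrf(g, grad, u0, tol=1e-10, itmax=100):
    """Hasofer-Lind/Rackwitz-Fiessler iteration in standard normal space:
    returns the reliability index beta and the design point u*."""
    u = list(u0)
    for _ in range(itmax):
        gu, dg = g(u), grad(u)
        norm2 = sum(d * d for d in dg)
        s = (sum(d * ui for d, ui in zip(dg, u)) - gu) / norm2
        u_new = [s * d for d in dg]
        if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
            u = u_new
            break
        u = u_new
    return math.sqrt(sum(ui * ui for ui in u)), u

# Assumed linear limit state g(u) = 3 - u1 - 2*u2 for illustration:
g = lambda u: 3.0 - u[0] - 2.0 * u[1]
grad = lambda u: [-1.0, -2.0]
beta, ustar = hlrf(g, grad, [0.0, 0.0])
p_f = NormalDist().cdf(-beta)     # first-order failure probability
```

For non-normal variables (as in the paper), the iteration is preceded by the normal-tail transformation mentioned above; the search itself is unchanged.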

NUMERICAL EXAMPLE

A numerical example is presented to illustrate the method developed here. In this example, the seismic reliability is evaluated paying attention to the response displacement at the top of the piers, in which the ductility factor and the coefficient of variation of the allowable displacement are assumed to be 4 and 0.1, respectively.

Consider a pier whose natural period is 0.626 sec and whose yielding displacement is 0.618 cm. Tables 4 and 5 present the failure probabilities obtained for two big cities in Japan, i.e., Tokyo and Osaka. These results are calculated by AFOSM and FOSM (First-Order Second-Moment Method), and the results of the two methods are compared. AFOSM is inefficient in some cases where the design point cannot exist beyond the lower bound of the epicentral distance. This phenomenon may be due to the fact that when the magnitude is rather small, the design point for the epicentral-distance variable is smaller than the lower bound prescribed by the past earthquake records. This leads to an underestimation of the failure probability in the underlying zone. For this reason, two failure probabilities are calculated for Tokyo, where the former provides the upper bound and the latter provides the lower bound of the failure probability. Namely, the true failure probabilities lie between these two probabilities.
In general, earthquakes occur more frequently in Tokyo than in Osaka. However, Table 4 shows that the failure probability for Tokyo becomes smaller than that for Osaka after 100 years. One reason for this result is that only a few zones have high earthquake occurrence rates in Tokyo, while in Osaka all zones have nearly uniform occurrence rates. Comparing the occurrence rates of some zones in Tokyo and Osaka, it is evident that these zones in Tokyo have much higher occurrence rates than the zones in Osaka. Therefore, the failure probabilities within about 90 years are larger in Tokyo. However, after 100 years, the total failure probability over the whole Osaka region exceeds that of Tokyo. Another reason may be the approximation error with regard to the transformation from non-normal variables to normal variables.

Comparing the results by AFOSM and FOSM, there are significant differences in the failure probabilities within 10 and 20 years. For these two cases, FOSM provides zero probabilities for Osaka whereas AFOSM provides 0.0393 and 0.0670. Naturally, the solutions by AFOSM are more reliable than those by FOSM. This discrepancy may be due to the approximations employed in FOSM, such as the Taylor expansion for the linearization of the limit state function and the neglect of information regarding the probability distribution functions.

Table 4 Comparison of Failure Probabilities (AFOSM)

Year       Tokyo                      Osaka
 10   0.857397E-1  0.859703E-1    0.393431E-1
 20   0.119889     0.120344       0.689941E-1
 30   0.142386     0.143138       0.945736E-1
 40   0.159350     0.160407       0.117471
 50   0.173082     0.174408       0.138367
 60   0.184705     0.186245       0.157669
 70   0.194840     0.196536       0.175645
 80   0.203867     0.205663       0.192993
 90   0.212037     0.213889       0.208340
100   0.219521     0.221385       0.223359

Table 5 Comparison of Failure Probabilities (FOSM)

Year      Tokyo           Osaka
 10   0.183105E-3     0
 20   0.189668E-2     0
 30   0.486881E-2     0.119209E-6
 40   0.825227E-2     0.447035E-5
 50   0.116341E-1     0.463720E-4
 60   0.148569E-1     0.230193E-3
 70   0.178819E-1     0.728964E-3
 80   0.207113E-1     0.122874E-2
 90   0.233752E-1     0.335276E-2
100   0.258992E-1     0.567657E-2

CONCLUSION

In this paper, an attempt was made to develop a simple method of evaluating the seismic reliability of existing structures. To introduce the spectral characteristics of earthquakes, the intensity parameter of the non-stationary spectrum was utilized for estimating the maximum response displacement in a simple manner. The reliability analysis was performed paying attention to the response displacement at the top of the pier, which makes the elasto-plastic analysis simpler. Needless to say, this simplification must be checked by performing a sufficient number of numerical calculations. Moreover, it is necessary to classify the structural behavior into elastic behavior and inelastic behavior. Using this classification, it is possible to obtain a more reliable regression curve for the estimation of the maximum response displacement.

REFERENCES

1) Kameda, H.: Evolutionary Spectra of Seismogram by Multifilter, Jour. of Eng. Mech. Div., ASCE, Vol.101, pp.787-801, 1975.
2) Kanda, J.: Safety Evaluation of Inelastic Building Structures in a Seismic Region, Proc. of Korea-Japan Joint Seminar on Emerging Technologies in Structural Engineering and Mechanics, pp.216-225, 1988.
3) Goto, H., Sugito, M., Kameda, H., Saito, H. and Ohtaki, K.: Prediction of Nonstationary Earthquake Motions for Moderate and Great Earthquakes on Rock Surface, Annuals, Disaster Prevention Research Institute, Kyoto Univ., No.27, B-2, pp.19-48, 1984.
4) Sugito, M. and Kameda, H.: Prediction of Nonstationary Earthquake Motions on Rock Surface, Proc. of JSCE Structural Eng./Earthquake Eng., Vol.2, No.2, pp.149-159, 1985.
5) Sugito, M., Goto, H. and Takeyama, S.: Conversion Factor between Earthquake Motion on Soil Surface and Rock Surface with Nonlinear Soil Amplification Effect, Proc. of 7th Japan Earthquake Engineering Symposium, pp.571-576, 1986.
6) Kameda, H. and Takagi, H.: Seismic Hazard Estimation Based on Non-Poisson Earthquake Occurrence, The Memoirs of the Faculty of Eng., Kyoto Univ., Vol.43, pp.397-433, 1981.
7) Thoft-Christensen, P. and Baker, M.J.: Structural Reliability Theory and Its Applications, Springer-Verlag, 1982.
SENSITIVITY ANALYSIS OF STRUCTURES BY
VIRTUAL DISTORTION METHOD

J. T. Gierlinski*, J. Holnicki-Szulc** & J. D. Sørensen***


*WS Atkins, London, UK
**Polish Academy of Sciences, Warsaw, Poland
***University of Aalborg, Denmark

1. Introduction
Deterministic and reliability-based structural optimization are very active research areas. Some of the reasons are the recent development of effective mathematical optimization algorithms, such as the NLPQL algorithm, Schittkowski [1], and the VMCWD algorithm, Powell [2], and of first-order reliability methods (FORM), see Madsen et al. [3]. The rapid growth of computing power has also been very important.
Most effective optimization algorithms require that the derivatives of the objective function and the constraints are determined with high accuracy. Usually, quasi-analytical derivatives are used in structural optimization, see Haftka [4].
The recently developed Virtual Distortion Method (VDM) is a numerical technique which offers an efficient approach to the calculation of the sensitivity derivatives. This method has originally been applied to structural remodelling and collapse analysis, see Holnicki-Szulc & Gierlinski [5] and Gierlinski & Holnicki-Szulc [6]. Some aspects of using VDM in optimization have also been discussed, see Holnicki-Szulc & Gierlinski [7] and Holnicki-Szulc [8]. Calculation of the derivatives by VDM is based on the same approach as employed in structural remodelling. Responses corresponding to two neighbouring modifications are calculated first, and then the derivatives of the response with respect to the design parameters are approximated using the finite difference technique. These response derivatives are essential ingredients of the sensitivity derivatives, and the most expensive to calculate.
In section 2 a deterministic structural optimization problem is considered and it is shown how the derivatives of the structural response (displacements and local forces) can be estimated quasi-analytically.
In section 3 reliability-based structural optimization problems are formulated. Some of the quantities describing the structural system are assumed to be modelled by stochastic variables; examples are the yield stress and the loading (e.g. wind and wave loads). The reliability of the structural system is measured by reliability indices estimated using FORM, see Madsen et al. [3]. It is shown how the derivatives of the reliability indices with respect to the design variables can be determined.
VDM can be used to estimate the derivatives of the structural response in an effective way, as described in section 4. A brief description of the VDM theory in relation to structural remodelling, and a discussion of how it can be used to estimate the derivatives of the structural response, are also given in section 4.
Finally, in section 5 a simple numerical example is shown.

2. Deterministic structural optimization

The deterministic structural optimization (design) problem is written

min  W(z)                                        (1)
s.t. b_i(z) ≤ 0 ,  i = 1, 2, ..., m              (2)
     z_i^l ≤ z_i ≤ z_i^u ,  i = 1, 2, ..., N     (3)

where the optimization variables z = (z_1, z_2, ..., z_N) can be related to the dimensions of the structural elements (sizing variables), the geometry of the structure (shape variables) or the configuration of the structural system. The objective function W(z) is usually connected to the structural weight or cost. The constraints signify that the stresses, displacements, etc. have to observe certain code-specified requirements. z_i^l and z_i^u, i = 1, 2, ..., N, are lower and upper bounds on the optimization variables.
First order optimization algorithms require that dW/dz_j and db_i/dz_j are calculated. Usually dW/dz_j can easily be estimated numerically.

If the ith constraint is related to the displacement of a given degree of freedom in a finite element model of the structural system and V = (V_1, V_2, ..., V_NK) denotes the displacement degrees of freedom, then the ith constraint in (2) can be written

b_i = V_I − V_max ≤ 0                            (4)

where V_max is the maximum permissible displacement and I is the degree of freedom corresponding to the ith constraint. Then

db_i/dz_j = ∂V_I/∂z_j = { K^{-1} ( ∂P/∂z_j − (∂K/∂z_j) V ) }_I        (5)

where K is the stiffness matrix with dimension NK × NK, P is the load vector and {·}_I signifies the Ith element of the vector.
It is usually cheap, from a numerical point of view, to estimate ∂P/∂z_j and ∂K/∂z_j (analytically or numerically).
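Eq. (5) can be illustrated on a minimal two-spring series model (material data, load and areas assumed for illustration), comparing the quasi-analytical derivative with a finite difference:

```python
import numpy as np

E, L, P = 210e9, 1.0, 1000.0     # illustrative material, length and load

def assemble(A1, A2):
    """Two springs in series (dofs: mid node u1, tip u2), tip load P."""
    k1, k2 = E * A1 / L, E * A2 / L
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    f = np.array([0.0, P])
    return K, f

A1, A2 = 1e-4, 2e-4
K, f = assemble(A1, A2)
V = np.linalg.solve(K, f)

# Eq. (5): dV/dA1 = K^{-1} (dP/dA1 - (dK/dA1) V), with dP/dA1 = 0 here
dK_dA1 = np.array([[E / L, 0.0], [0.0, 0.0]])
dV_dA1 = np.linalg.solve(K, -dK_dA1 @ V)

# finite-difference check of the quasi-analytical derivative
h = A1 * 1e-6
Kp, fp = assemble(A1 + h, A2)
fd = (np.linalg.solve(Kp, fp) - V) / h
```

For this model the mid-node displacement is P L/(E A1), so dV_1/dA1 = −P L/(E A1²), which both estimates reproduce.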

If the ith constraint for example models a stress constraint, then it is written

b_i(z, V(z)) ≤ 0                                 (6)

One "classical" method to estimate db_i/dz_j is to use quasi-analytical derivatives, see Haftka [4]:

db_i/dz_j = ∂b_i/∂z_j + (∂b_i/∂V)^T ∂V/∂z_j
          = ∂b_i/∂z_j + (∂b_i/∂V)^T K^{-1} ( ∂P/∂z_j − (∂K/∂z_j) V )        (7)

At each iteration in a first order optimization algorithm (which requires function values and derivatives) the number of solutions of the FEM problem (the so-called large problem) is (N + 1).
Another method to estimate db_i/dz_j is the adjoint method, in which an adjoint vector λ_i is obtained from K λ_i = ∂b_i/∂V, so that

db_i/dz_j = ∂b_i/∂z_j + λ_i^T ( ∂P/∂z_j − (∂K/∂z_j) V )        (8)

At each iteration the number of solutions of FEM problems (large problems) is (m + 1). Which of these two methods is the most effective thus depends on the number of optimization variables N and the number of constraints m.

3. Reliability-based structural optimization

In reliability-based structural optimization a number of formulations can be used. If the reliability of the structure is measured by element reliability indices, then the reliability-based optimization problem can be formulated as, see Sørensen [9] and Sørensen & Thoft-Christensen [10]

min  W(z)                                        (9)
s.t. β_i(z) ≥ β_i^min ,  i = 1, 2, ..., m        (10)
     B_i(z) ≥ 0 ,  i = 1, 2, ..., M              (11)
     z_i^l ≤ z_i ≤ z_i^u ,  i = 1, 2, ..., N     (12)

β_1, ..., β_m are the reliability indices of the m potential failure elements, β_i^min, i = 1, 2, ..., m, the corresponding minimum reliability indices, and (11) models M deterministic constraints.
The reliability indices are determined by, see Madsen et al. [3]:

β_i =   min   (u^T u)^{1/2}                      (13)
      g_i(u, z, b(z)) = 0

where u = (u_1, ..., u_n) are realizations of the standardized normal stochastic variables U, which are connected with the stochastic variables X by X = T(U). X models the uncertainty related to the quantities and models describing the load and/or strength of the structural elements. The stiffness properties are assumed to be deterministic. Realizations x̄ of X where the failure function g ≤ 0 correspond to failure states, while realizations x̄ where g > 0 correspond to safe states.
Alternatively the reliability-based optimization problem can be formulated with a system reliability constraint. In this case (10) is replaced by the constraint

β^s(z) ≥ β^min                                   (14)

If the failure functions are written

g_i(u, z, b(z)) = 0 ,  i = 1, 2, ..., m          (15)

then the derivatives of the element reliability constraints with respect to the optimization variables can be determined from

dβ_i/dz_j = (1/|∇_u g_i|) [ ∂g_i/∂z_j + Σ_k (∂g_i/∂b_k)(∂b_k/∂z_j) ]        (16)

where all terms are calculated at the design point (the solution point of the optimization problem (13)). ∂g_i/∂z_j and ∂g_i/∂b_k are usually estimated numerically or analytically without significant computer costs. The term ∂b_k/∂z_j is determined as described in section 2.
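As a sketch of Eq. (16) in its simplest setting (one design variable, linear Gaussian safety margin, no implicit b(z)-terms; all parameter values assumed), the reliability index and its design-variable derivative are available in closed form and can be checked against a finite difference:

```python
import math

# Assumed safety margin in standard normal space:
# g(u, z) = c*z + s_r*u1 - mu_s - s_s*u2, all parameters illustrative.
c, mu_s, s_r, s_s = 2.0, 3.0, 0.5, 1.0

def beta(z):
    """Reliability index of the linear margin: distance to the failure plane."""
    return (c * z - mu_s) / math.hypot(s_r, s_s)

# Eq. (16) specialised to this case: dbeta/dz = (1/|grad_u g|) * (dg/dz)
dbeta_dz = c / math.hypot(s_r, s_s)

# central finite-difference check at z = 3
h = 1e-6
fd = (beta(3.0 + h) - beta(3.0 - h)) / (2.0 * h)
```

Because the margin is linear, the finite difference and the closed-form derivative agree to rounding error; for nonlinear limit states the same formula is evaluated at the design point.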

The derivatives of the system reliability constraint with respect to the optimization variables can be determined from

dβ^s/dz_j = Σ_{i=1}^{m} (∂β^s/∂β_i)(∂β_i/∂z_j) + 2 Σ_{i<k} (∂β^s/∂ρ_ik)(∂ρ_ik/∂z_j)        (17)

where ∂β_i/∂z_j is determined by (16). ∂β^s/∂β_i, ∂β^s/∂ρ_ik and ∂ρ_ik/∂z_j can be estimated as described in Sørensen [11].

4. Sensitivity analysis using VDM

In this section it is described how the sensitivity coefficients ∂V/∂z_j can be estimated using VDM. However, let us first consider the basic concept of this method in application to truss structures. As design variables, dimensions of the cross-sections in truss structures are used (e.g. the cross-sectional areas A_i , i = 1, ..., N_A). The design variables are grouped in N groups, i.e. the number of design variables is N, and the number of affected members in the structural model is N_A.
The Virtual Distortion Method is based on the requirement that the deformations and forces in the so-called modified and distorted structures should be equal. These requirements can be expressed by considering element i, i = 1, ..., N_A :
ε_i = ε_i^L + ε_i^R                              (18)
Ā_i σ_i = A_i ( σ_i^L + σ_i^R )                  (19)

where
ε_i and σ_i are the strains and stresses in the modified structure due to the external load,
ε_i^L and σ_i^L are the strains and stresses in the original structure due to the external load,
ε_i^R and σ_i^R are the strains and stresses in the distorted structure (caused by the virtual distortions),
A_i and Ā_i are the cross-sectional areas in the original and modified structure, respectively.
The strains ε_i^R and stresses σ_i^R are defined as follows

ε_i^R = Σ_{j=1}^{N_A} D_ij ε_j^0                 (20)

σ_i^R = E_i Σ_{j=1}^{N_A} ( D_ij − δ_ij ) ε_j^0  (21)

where
ε_j^0 are the virtual distortions,
D_ij is the (i, j)th element of the influence matrix, describing the deformation of member "i" caused by the virtual distortion ε_j^0 = 1 of member "j",
δ_ij is the Kronecker symbol and
E_i are the elastic material properties.
Substituting (20) and (21) into (18) and (19) gives, respectively,

ε_i = ε_i^L + Σ_{j=1}^{N_A} D_ij ε_j^0           (22)

Ā_i σ_i = A_i [ σ_i^L + E_i Σ_{j=1}^{N_A} ( D_ij − δ_ij ) ε_j^0 ]
        = A_i E_i ( ε_i − Σ_{j=1}^{N_A} δ_ij ε_j^0 )        (23)

Modifications of stresses and deformations caused by modifications of the design variables can be determined through the virtual distortions ε_i^0, i = 1, 2, ..., N_A, defined by the following simulation problem, obtained by eliminating ε_i from (22) and (23)

Σ_{j=1}^{N_A} B_ij(γ_i) ε_j^0 = −γ_i ε_i^L       (24)

where
B_ij = γ_i D_ij + δ_ij
γ_i = (Ā_i − A_i)/A_i

Calculation of the components of the matrix D requires N_A solutions of a finite element problem

K V(j) = P(j)                                    (25)

where P(j) = f(ε_j^0 = 1) is the load vector corresponding to the virtual distortion ε_j^0 = 1 of member "j" and V(j) is the vector of nodal displacements in the global coordinates. Denote by d(ij) the corresponding vector of local displacements of the ith member. The components of the influence matrix D can now be calculated from

D_ij = ( d_2(ij) − d_1(ij) ) / l_j               (26)

where l_j is the length of member j.


The updated deformations and displacements are estimated from (22) and

V = V^L + Σ_{j=1}^{N_A} V(j) ε_j^0               (27)

Based on a finite difference scheme the gradients can be estimated from

∂V/∂A_j ≈ ΔV(j)/ΔA_j                             (28)

db_i/dA_j = ∂b_i/∂A_j + (∂b_i/∂V)^T ∂V/∂A_j      (29)

The computational costs to estimate db_i/dA_j in connection with a first order optimization algorithm (such as NLPQL or VMCWD) are:
Initially: (N_A + 1) solutions of the FEM problem (large problem) - equation (25).
At each iteration: (N + 1) solutions of the simulation problem (small problem) - equation (24).

It should be noted that generally B is a full matrix whereas the stiffness matrix K usually is banded.
Performing the sensitivity analysis based on the VDM concept is particularly favourable when the number of modified elements N_A is relatively small with respect to the number of all structural elements. This advantage grows if the sensitivity calculations need to be repeated many times in the optimization process.
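A minimal sketch of the simulation problem (24) on an assumed system of three parallel bars of unit length sharing a common end displacement: the VDM update reproduces the exactly re-analysed response of the modified structure without re-assembling the stiffness matrix.

```python
import numpy as np

# Three parallel bars (unit length, common end displacement) under axial load P.
# Axial stiffnesses E_i*A_i; every member strain equals the common displacement u.
EA = np.array([1.0, 2.0, 3.0])
P = 6.0
epsL = np.full(3, P / EA.sum())       # linear strains eps^L (all equal 1.0)

# Influence matrix D: strain of member i due to a unit distortion in member j.
# For this parallel system a unit distortion in j gives u = EA_j / sum(EA).
D = np.tile(EA / EA.sum(), (3, 1))

# Modify member 2: A_bar = 1.5 * A  ->  gamma = (A_bar - A)/A = 0.5
gamma = np.array([0.0, 0.5, 0.0])

# Simulation problem, Eq. (24): (I + diag(gamma) @ D) eps0 = -gamma * epsL
B = np.eye(3) + np.diag(gamma) @ D
eps0 = np.linalg.solve(B, -gamma * epsL)

# Updated strains, Eq. (22), and exact re-analysis for comparison
eps = epsL + D @ eps0
EA_mod = EA * (1.0 + gamma)
u_exact = P / EA_mod.sum()            # exact common displacement, 6/7
```

The VDM strains match the exact re-analysis, and the member forces of the modified structure remain in equilibrium with the applied load.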

5. Example

(Figure: a plane truss jacket model, 51.15 m wide at the base and 31.65 m wide at the top, with the still water level (SWL) and the applied horizontal loads indicated.)

Figure 1. Plane truss model of jacket structure.

A plane truss model of an offshore jacket structure is considered. The structural system and the loading are shown in Fig. 1. The section properties of the truss elements are shown in Table 1.
group i elements Area Ai


1 1,4 0.2476
2 2,3,5,6 0.1847
3 7 0.6283
4 8,9 0.1083
5 10,11 0.07658
6 12,13,14,15 0.1008
7 16,17,18,19 0.1291
Table 1. Section properties.

element i    sensitivity - eq. (7)    sensitivity - VDM


1 33.69 33.66
2 -35.50 -35.47
3 34.66 34.63
4 33.69 33.66
5 -35.50 -35.47
6 34.66 34.63
7 53.78 53.73
8 -60.27 -60.21
9 -60.27 -60.21
10 -67.51 -67.44
11 -67.51 -67.44
12 60.66 60.60
13 60.66 60.60
14 68.02 67.96
15 68.02 67.96
16 -59.82 -59.76
17 -59.82 -59.76
18 -66.91 -66.85
19 -66.91 -66.85
Table 2. Sensitivity coefficients.

In Table 2 the sensitivity coefficients of the member forces are shown for all structural elements with respect to changes in the cross-sectional area of the elements in group 2. The "classical" method, eq. (7), is compared with estimates obtained with VDM using ΔA_i = A_i/1000. Some small deviations in the sensitivity coefficients are obtained, but generally there is good agreement between the results obtained by the two methods.
In this example N_K = 18. If all N = 7 groups of elements are modelled as design variables, then N_A = 19. The "classical" method for estimating db_i/dz_j requires (N + 1) = 8 solutions of the FEM equations with N_K = 18 unknowns, whereas the VDM technique requires at each iteration N + 1 = 8 solutions of the simulation problem with N_A = 19 unknowns.
However, if only group 2 is considered as a design variable, then N = 1 and N_A = 4. The "classical" method requires 2 solutions of the FEM equations with 18 unknowns, whereas the VDM technique requires at each iteration 2 solutions of the simulation problem with only 4 unknowns.

6. Conclusions
The following conclusions can be stated :
• It has been demonstrated that the Virtual Distortion Method can be successfully applied to the estimation of displacement sensitivity coefficients (estimates of stress sensitivity coefficients can also very easily be determined).
• The most costly computation is concerned with the influence matrix D. In most cases the following iterative procedure requires much less computer effort.
• Which of the three methods for estimating sensitivity coefficients is the most effective from a computational point of view depends on the actual values of N_K, N, m and N_A.

• It is interesting to note that the influence matrix in the VDM approach is a function of
elastic properties and element topology of the structure. It can be calculated employing
either the displacement or the force method, and thus the VDM approach can be associated
with practically any existing software for truss or frame analysis.

7. References
[1] Schittkowski, K.: NLPQL : A FORTRAN Subroutine Solving Constrained Non-Linear Programming Problems. Annals of Operations Research, 1986.
[2] Powell, M.J.D. : VMCWD : A Fortran Subroutine for Constrained Optimization. Report DAMTP 1982/NA4, Cambridge University, England, 1982.
[3] Madsen, H.O., S. Krenk & N.C. Lind : Methods of Structural Safety. Prentice-Hall, 1986.
[4] Haftka, R.T. & H.P. Kamat : Elements of Structural Optimization. Martinus Nijhoff Publishers, Dordrecht, 1985.
[5] Holnicki-Szulc, J. & J.T. Gierlinski : Structural Modifications Simulated by Virtual Distortions. Int. J. Methods Eng., Vol. 20, pp. 645-666, 1989.
[6] Gierlinski, J.T. & J. Holnicki-Szulc : Progressive Collapse Analysis of Frames Using the Virtual Distortion Method. In preparation.
[7] Holnicki-Szulc, J. & J.T. Gierlinski : Optimization of Skeletal Structures with Material Nonlinearities. Proc. of the First Int. Conf. on Computer Aided Design of Structures, Southampton, U.K., June 1989, pp. 209-220.
[8] Holnicki-Szulc, J. : Optimal Structural Remodelling - Simulation by Virtual Distortion Method. Comm. Applied Num. Methods, 1989.
[9] Sørensen, J.D.: Probabilistic Design of Offshore Structural Systems. Proc. 5th ASCE Spec. Conf., Virginia, 1988, pp. 189-193.
[10] Sørensen, J.D. & P. Thoft-Christensen : Structural Optimization with Reliability Constraints. Proc. 12th IFIP Conf. on System Modelling and Optimization, Springer-Verlag, 1986, pp. 876-885.
[11] Sørensen, J.D. : Reliability Based Optimization of Structural Elements. Structural Reliability Theory, Paper No. 18, The University of Aalborg, 1986.
RELIABILITY OF DANIELS SYSTEMS WITH LOCAL LOAD
SHARING SUBJECT TO RANDOM TIME DEPENDENT INPUTS

Mircea Grigoriu
Cornell University
Hollister Hall, Ithaca, NY 14853, U.S.A.

INTRODUCTION

Daniels systems consist of n parallel brittle fibers and can carry load in damage states m = n, n−1, ..., 1, characterized by m unfailed fibers and n−m failed fibers (Fig. 1). It is assumed that the distribution of the applied load among the fibers exhibits concentrations in the vicinity of failed fibers (local load sharing rule), consistent with the stress distribution observed in composite and fiber-reinforced materials (9). Most studies on the reliability of Daniels systems involve elementary loading conditions, e.g., time-invariant and monotonic loads (8,10). Dynamic loads have begun to be considered only recently, for Daniels systems with the equal load sharing rule (1,2,3,4).

(Figure labels: m unfailed fibers, n−m failed fibers.)

Figure 1. Daniels System with n Fibers in Damage State m < n Subject to Load Process X(τ).

Consider a dynamic Daniels system with n fibers of random resistances {R_i}, i = 1, 2, ..., n, that is subject to a load process X(τ), τ ≥ 0. The duration of residence in a damage state m depends on the load sharing rule, the probabilistic characteristics of the fiber resistances, the dynamical properties of the Daniels system, and the features of the applied load. Let P_S(τ) be the probability (reliability) of system survival in (0, τ) and

P_F(τ) = 1 − P_S(τ)                              (1)

the corresponding failure probability. The system survives as long as there is at least one unfailed fiber.

This paper develops a technique for estimating and bounding the probability PF(τ)
for Daniels systems with an elementary local load sharing rule and a relatively
small number of fibers. The technique is based on crossing characteristics of a
generalized Slepian process, probability bounds for systems, and first and second
order reliability methods (FORM/SORM). The paper is an extension of recent work by
the author on dynamic Daniels systems with the equal load sharing rule (4). It is
assumed that (i) resistances {Ri}, i = 1, ..., n, are independent identically
distributed random variables; (ii) the spatial distribution of fibers is uniform on
locations {1, 2, ..., n}; a particular spatial distribution of fiber resistances
is referred to as a spatial configuration; (iii) the load carried by a fiber prior
to failure is redistributed evenly between the nearest surviving fibers. Other more
complex load sharing rules (9) can be directly incorporated in the proposed
technique for the reliability analysis of dynamic Daniels systems; (iv) X(τ),
τ ≥ 0, is a quasistatic Gaussian load process; and (v) fibers are in tension with
nearly unit probability and fail when the load effect exceeds the fiber strength
for the first time. Other failure conditions, e.g., failure caused by fatigue, can
also be incorporated in the analysis.

Numerical results for probability PF(τ) and bounds on this probability are
presented for Daniels systems with n = 3 fibers of deterministic and exponentially
distributed resistances that are subject to a quasistatic uniformly modulated
Gaussian load process.

FAILURE PATHS AND PROBABILITIES

The fiber spatial configuration can significantly affect damage evolution and
reliability under local load sharing rules because stronger fibers can be subject
to load concentrations causing their failure prior to the failure of weaker fibers.
Thus, various spatial configurations may generate different failure paths depending
on the relative magnitude of fiber resistances and load concentration factors.

Failure Paths
" <"R2 < ... < Rn
Let Rl " be the order statistics of fiber resistances {Ril, i
1, ... , n, (5). There are n!/2 distinct spatial configurations of these resis-
" " 1\ A 1\ 1\

tances because, e.g., configurations {Rl, R2, ... , Rn-l' Rnl and {Rn , Rn-l' ... ,
"
R2, "
Rll are indistinguishable from a mechanical viewpoint. Each spatial config-
"
uration of order resistances {Ril, i - I , 2, ... , n, can result in one or more
failure paths depending on the relative magnitude of the resistances. Figure 2
shows spatial configurations of fiber resistances and failure paths for a Daniels
149

system with n - 3 fibers. The probability of individual spatial configurations is


1/3 when fiber location is assumed to follow a uniform distribution. Note that
load carried by a fiber at the very left or right of the system is transferred
fully to the only available neighbor at failure. The number of spatial configu-
rations and failure paths increases rapidly with n. For example. there are ns =

n!/2 - 12 distinct spatial configurations of fiber resistances for a Daniels


system with n - 4 fibers.

Probability of Failure
Let qs be the number of failure paths associated with spatial configuration s =
1, 2, ..., ns. The total number of distinct failure paths is q = Σ_{s=1}^{ns} qs.
Since failure paths are disjoint events, the probability of failure PF(τ) in Eq. 1
can be obtained from

PF(τ) = Σ_{j∈J} pj PF,j(τ)     (2)

in which pj = the probability of occurrence of failure path j, J = the set of
distinct failure paths, and PF,j(τ) = the probability of failure of a Daniels
system given the occurrence of failure path j. Failure path probabilities {pj} can
be obtained from the probability of spatial configurations and of failure paths
given a spatial configuration. Conditional failure probabilities PF,j(τ) can be
obtained from

PF,j(τ) = P[ Σ_{m=1}^{n} Ym,j < τ ]     (3)

in which Ym,j = the duration of damage state m along failure path j. Methods for
calculating these probabilities are examined in the next section.

Exact determination of the failure probability PF(τ) in Eq. 2 becomes impractical
for systems with many fibers due to the very large number of failure paths that
must be investigated. To reduce computation, bounds on PF(τ) can be developed to
evaluate the performance of Daniels systems (11). Let J' be any subset of J. From
Eq. 2,

Σ_{j∈J'} pj PF,j(τ)     (4)

constitutes a lower bound on PF(τ). Consider a failure path j∈J and a damage
state mj = n, n-1, ..., 1. Let

PF,j(τ; mj) = P[ Σ_{m=mj}^{n} Ym,j < τ ]     (5)

Then,

Σ_{m=mj}^{n} Ym,j ≤ Σ_{m=1}^{n} Ym,j     (6)

because {Ym,j} are nonnegative random variables. From Eq. 2,

Σ_{j∈J} pj PF,j(τ; mj)     (7)

constitutes an upper bound on PF(τ) for mj = n, n-1, ..., 1. This bound involves
all incomplete failure paths of the system.
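For small n, the configuration and path counts above can be verified by direct
enumeration. The following Python sketch (a minimal illustration; the function
name is ours) enumerates permutations of the order statistics and treats a
configuration and its mirror image as one:

```python
from itertools import permutations

def distinct_configurations(n):
    """Enumerate the n!/2 distinct spatial configurations of the order
    statistics, treating a configuration and its mirror image as one."""
    seen = set()
    configs = []
    for perm in permutations(range(1, n + 1)):
        if perm[::-1] in seen:
            continue  # mirror image already counted
        seen.add(perm)
        configs.append(perm)
    return configs

print(len(distinct_configurations(3)))  # 3 configurations, as in Fig. 2
print(len(distinct_configurations(4)))  # 12, as stated above
```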

FAILURE PATH PROBABILITY

Consider a quasistatic load process X(τ) and let

Xm,j(t) = X(Tm+1,j + t)     (8)

be the resultant force acting on the unfailed fibers of a Daniels system in damage
state m of failure path j. Time t originates at the beginning of damage state m
and the random variable

Tm+1,j = Σ_{q=m+1}^{n} Yq,j     (9)

denotes the total duration of damage states n, n-1, ..., m+1 along failure path j.
Second-moment characteristics of Xm,j(t) follow directly from the corresponding
descriptors of the applied load X(τ) and Eq. 8 for any given value of the random
time Tm+1,j.

Critical Thresholds
Consider a damage state m of failure path jcJ o£ a Dajiels system subject to
quasistatic load X(~). Let
.0.
~,j' 0 <
0-1 m,
.0.
~,j ~ !
and r a j = I, be the load
concentration factor for a fiber of strength R.o. in Eh1s damage state. For example,
A A

these factors are 2/3 and 1/3 for the fibers of strength R2 and R3 of damage state
A A A

m - 2 and spatial configuration (Rl, R2, R3) of the Daniels systems in Fig. 2. A
151

fiber fails when load process am~j X(Yn,j + ... + Ym+l,j + t), t ~ 0, exceeds for
A

the first time Rl.

The probabilities of the random variables Ym,j and Σ_{m=mj}^{n} Ym,j depend on
characteristics of Xm,j(t) and ηm,j for m = n, n-1, ..., mj and are determined in
the next two sections. Thus, joint probabilistic characteristics of the critical
thresholds ηm,j are needed for estimating the reliability of Daniels systems and
can be obtained from probabilities of order resistances {R̂i}, i = 1, 2, ..., n.

[Figure 2. Distinct Spatial Configurations of Order Resistances and Failure Paths
for a Daniels System with n = 3 Fibers: (a) spatial distribution {R̂1, R̂2, R̂3};
(b) spatial distribution {R̂1, R̂3, R̂2}; (c) spatial distribution
{R̂2, R̂1, R̂3}.]

Let f and F be the common probability density and distribution function of
resistances {Ri}, i = 1, 2, ..., n. Then, the densities of R̂i; {R̂i, R̂j},
i < j; {R̂i, R̂j, R̂k}, i < j < k; and {R̂1, R̂2, ..., R̂n} are, respectively,

f_{R̂i}(ui) = n!/[(i-1)!(n-i)!] [F(ui)]^{i-1} [1 - F(ui)]^{n-i} f(ui),

f_{R̂i,R̂j}(ui, uj) = n!/[(i-1)!(j-i-1)!(n-j)!] [F(ui)]^{i-1} [F(uj) - F(ui)]^{j-i-1}
    * [1 - F(uj)]^{n-j} f(ui) f(uj),     (ui < uj)

f_{R̂i,R̂j,R̂k}(ui, uj, uk) = n!/[(i-1)!(j-i-1)!(k-j-1)!(n-k)!] [F(ui)]^{i-1}
    * [F(uj) - F(ui)]^{j-i-1} [F(uk) - F(uj)]^{k-j-1} [1 - F(uk)]^{n-k}
    * f(ui) f(uj) f(uk),     (ui < uj < uk)

f_{R̂1,...,R̂n}(u1, u2, ..., un) = n! Π_{k=1}^{n} f(uk),     (u1 < u2 < ... < un)     (10)

These densities can be used to calculate conditional fiber resistances. For
example, the densities of the conditional variables R̂2|R̂1 and R̂3|R̂2, R̂1 are

f_{R̂2|R̂1}(u2|u1) = f_{R̂1,R̂2}(u1, u2) / f_{R̂1}(u1),

f_{R̂3|R̂2,R̂1}(u3|u2, u1) = f_{R̂1,R̂2,R̂3}(u1, u2, u3) / f_{R̂1,R̂2}(u1, u2)     (11)
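The first line of Eq. 10 can be checked by simulation. The sketch below assumes
the shifted exponential resistances used later in the applications (a = 1, λ = 1)
and compares a Monte Carlo estimate of P(R̂2 ∈ [1.5, 2.0]) for n = 3 with the
integrated closed-form density; the interval and sample size are illustrative
choices:

```python
import math
import random

def order_stat_pdf(u, i, n, f, F):
    """Density of the i-th order statistic of n iid resistances
    (first line of Eq. 10)."""
    c = math.factorial(n) / (math.factorial(i - 1) * math.factorial(n - i))
    return c * F(u) ** (i - 1) * (1.0 - F(u)) ** (n - i) * f(u)

# Shifted exponential resistances (assumed: a = 1, lam = 1)
a, lam = 1.0, 1.0
f = lambda r: lam * math.exp(-lam * (r - a)) if r > a else 0.0
F = lambda r: 1.0 - math.exp(-lam * (r - a)) if r > a else 0.0

# Monte Carlo estimate of P(R^_2 in [1.5, 2.0]) for n = 3 fibers
random.seed(0)
n, i, lo, hi = 3, 2, 1.5, 2.0
trials = 200_000
hits = 0
for _ in range(trials):
    r2 = sorted(a + random.expovariate(lam) for _ in range(n))[i - 1]
    hits += lo <= r2 <= hi

# Midpoint-rule integral of the closed-form density over [lo, hi]
m = 1000
du = (hi - lo) / m
integral = sum(order_stat_pdf(lo + (k + 0.5) * du, i, n, f, F) for k in range(m)) * du
print(abs(hits / trials - integral) < 0.01)  # the two estimates agree
```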

Residence Periods
Consider a differentiable Gaussian process V(t), t ≥ 0, with mean and covariance
functions μ(t) = E V(t) and γ(t, s) = E[V(t) - μ(t)][V(s) - μ(s)] satisfying the
initial conditions V(0) = η', V̇(0) = ζ. The mean η-upcrossing rate of
V(t) | (V(0) = η', V̇(0) = ζ) can be obtained from Rice's formula and has the
expression (6)

ν(η; t) = (st / s22,t^{1/2}) φ[(η - at)/s22,t^{1/2}] Ψ(λt/st)     (12)

in which

Ψ(u) = u Φ(u) + φ(u)

φ(u) = (2π)^{-1/2} exp(-0.5 u²)

Φ(u) = ∫_{-∞}^{u} φ(a) da

λt = bt + (s12,t/s22,t)(η - at)

st² = s11,t - s12,t²/s22,t     (13)

and at = E[V(t)|·], bt = E[V̇(t)|·], and sij,t are the conditional means and
covariances of the vector (V̇(t), V(t)) given (V̇(0) = ζ, V(0) = η'), obtained by
standard Gaussian conditioning:

[bt]   [μ̇(t)]   [∂²γ/∂t∂s(t,0)  ∂γ/∂t(t,0)] [∂²γ/∂t∂s(0,0)  ∂γ/∂s(0,0)]⁻¹ [ζ - μ̇(0)]
[at] = [μ(t) ] + [∂γ/∂s(t,0)    γ(t,0)     ] [∂γ/∂s(0,0)     γ(0,0)    ]    [η' - μ(0)]

[s11,t  s12,t]   [∂²γ/∂t∂s(t,t)  ∂γ/∂t(t,t)]
[s21,t  s22,t] = [∂γ/∂s(t,t)     γ(t,t)    ]

  [∂²γ/∂t∂s(t,0)  ∂γ/∂t(t,0)] [∂²γ/∂t∂s(0,0)  ∂γ/∂s(0,0)]⁻¹ [∂²γ/∂t∂s(t,0)  ∂γ/∂s(t,0)]
- [∂γ/∂s(t,0)     γ(t,0)    ] [∂γ/∂s(0,0)     γ(0,0)    ]    [∂γ/∂t(t,0)     γ(t,0)    ]     (14)

The probability of the first passage time T of V(t) relative to the set (-∞, η),
η > η', can be approximated by
P(T > t) ≈ exp( - ∫_0^t ν(η; s) ds )     (15)

provided that η is large and the difference between η and η' is not small. The
approximation is based on the assumption that exceedances of threshold η follow an
inhomogeneous Poisson process of intensity ν(η; t). It can be shown that the slope
of V(t) at the time t of an η-upcrossing follows the probability density (4)

g(v|η) = v f(v|η) / ∫_0^∞ w f(w|η) dw,     v > 0     (16)

in which f(v|η) is the density of V̇(t) conditional on V(t) = η, or

f(v|η) = (2πγ̃)^{-1/2} exp[-(v - θ)²/(2γ̃)]     (17)

where

θ = μ̇(0) + [∂γ/∂s(0,0) / γ(0,0)] [η - μ(0)]

γ̃ = ∂²γ/∂t∂s(0,0) - [∂γ/∂s(0,0)]² / γ(0,0)     (18)

Consider the response process Xm,j(t), t ≥ 0, of a Daniels system in damage state
m of failure path j. The process satisfies the initial conditions Xm,j(0) =
ηm+1,j and Ẋm,j(0) = Zm+1,j at the time of an ηm+1,j-upcrossing. Thus,
probabilities of the duration of damage state m and the slope of the response
Xm,j(t) at the end of this damage state can be obtained from Eqs. 15 and 17 for
specified values of ηm+1,j, Zm+1,j, and ηm,j. This observation is used to
calculate probabilities of failures for failure paths and systems.
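For a stationary Gaussian process, the Poisson approximation of Eq. 15 reduces to
P(T > t) = exp(-ν(η) t) with the unconditional Rice rate. The sketch below uses
assumed parameter values (unit slope standard deviation) purely for illustration:

```python
import math

def rice_rate(eta, mu, sigma, sigma_dot):
    """Mean eta-upcrossing rate of a stationary Gaussian process with mean
    mu, standard deviation sigma and slope standard deviation sigma_dot
    (unconditional Rice formula; a stationary special case of Eq. 12)."""
    return (sigma_dot / (2.0 * math.pi * sigma)) * math.exp(-0.5 * ((eta - mu) / sigma) ** 2)

def survival(t, eta, mu, sigma, sigma_dot):
    """Poisson approximation of Eq. 15; for a stationary process the
    intensity is constant in time, so the integral is nu * t."""
    return math.exp(-rice_rate(eta, mu, sigma, sigma_dot) * t)

# Example: mean 5 and unit variance (as in the application section),
# slope standard deviation 1 (assumed), threshold eta = 8
p = survival(100.0, 8.0, 5.0, 1.0, 1.0)
print(round(p, 3))  # 0.838
```

The approximation is accurate here because the threshold sits three standard
deviations above the mean, so upcrossings are rare and nearly independent.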

Probabilities PF,j(τ; mj)
Consider a spatial fiber strength configuration and a failure path j∈J
consistent with this configuration. Suppose the system's initial state
(Xn,j(0), Ẋn,j(0)) is a random vector and let F1(x) = P(Xn,j(0) < x) and
F2|1(ẋ|x) = P(Ẋn,j(0) < ẋ | Xn,j(0) = x). The probability of the residence
period Yn,j conditional on Xn,j(0) = x, Ẋn,j(0) = ẋ, and ηn,j = ηn can be
obtained from Eq. 15 in which ν(η; s) is replaced by the mean ηn-upcrossing rate
of Xn,j(t) at t > 0 conditional on Xn,j(0) = x, Ẋn,j(0) = ẋ, and ηn,j = ηn.
First, Eq. 17 is used to find the probability of the random variable Zn,j
conditional on {Xn,j(0) = x, Ẋn,j(0) = ẋ, ηn,j = ηn, Yn,j = yn,j}. Then an
approximation can be obtained from Eq. 15 for the distribution of
Yn-1,j | {Xn,j(0) = x, Ẋn,j(0) = ẋ, ηn,j = ηn, Yn,j = yn,j, Zn,j = zn,j,
ηn-1,j = ηn-1}. The process is repeated until the probabilities of all variables
Ym,j, m = n, n-1, ..., mj, are determined.

Let Fη_{m,j}, FY_{m,j}, and FZ_{m,j} be the distributions of the conditional
variables ηm,j | ηn,j, ..., ηm+1,j; Ym,j | Xn,j(0), Ẋn,j(0), ηn,j, Yn,j, Zn,j,
..., Zm+1,j, ηm,j; and Zm,j | Xn,j(0), Ẋn,j(0), ηn,j, Yn,j, Zn,j, ..., Zm+1,j,
ηm,j, Ym,j, respectively. Denote by Φ the distribution of the standard Gaussian
variable with mean zero and unit variance. Consider the change of variable from
the space [Xn,j(0), Ẋn,j(0), ηn,j, Yn,j, Zn,j, ..., Zmj+1,j, ηmj,j, Ymj,j] to the
space (U1, U2, ..., U_{3(n-mj+1)+1}) of independent standard Gaussian variables
(12). Let Ym,j = hm,j(U1, U2, ..., U_{3(n-m+1)+1}) be the expression of the
residence period in damage state m of failure path j in terms of the Gaussian
variables {Uk}. Thus, the probability PF,j(τ; mj) = P(Σ_{m=mj}^{n} Ym,j < τ) can
be evaluated in the standard Gaussian space [U1, U2, ...] from

PF,j(τ; mj) = P(gmj(U) ≤ 0)     (19)

where

gmj(U) = Σ_{m=mj}^{n} hm,j(U1, U2, ...) - τ     (20)

is interpreted as a limit state. Probability PF,j(τ; mj) in Eq. 19 can be obtained
approximately by, e.g., first and second order reliability methods (FORM/SORM).
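Probabilities of the form P(g(U) ≤ 0) in Eq. 19 are typically computed by FORM.
A minimal sketch of the HL-RF iteration for the reliability index β = |u*| is
given below, checked on a linear limit state where β = 3/√2 exactly; the function
names are ours and the gradient is supplied analytically for simplicity:

```python
import math

def form_beta(grad_g, g, u0, tol=1e-8, itmax=100):
    """HL-RF iteration for the FORM reliability index beta = |u*| of the
    limit state g(u) <= 0 in standard Gaussian space."""
    u = list(u0)
    for _ in range(itmax):
        gv = g(u)
        dg = grad_g(u)
        norm2 = sum(d * d for d in dg)
        # HL-RF update: u_new = ((dg . u - g(u)) / |dg|^2) * dg
        c = (sum(d * ui for d, ui in zip(dg, u)) - gv) / norm2
        u_new = [c * d for d in dg]
        if max(abs(p - q) for p, q in zip(u, u_new)) < tol:
            u = u_new
            break
        u = u_new
    return math.sqrt(sum(ui * ui for ui in u))

# Linear limit state g(u) = 3 - u1 - u2: exact beta = 3 / sqrt(2)
g = lambda u: 3.0 - u[0] - u[1]
grad_g = lambda u: [-1.0, -1.0]
beta = form_beta(grad_g, g, [0.0, 0.0])
print(abs(beta - 3.0 / math.sqrt(2)) < 1e-6)  # True
```

For a linear limit state the iteration converges in one step; for the nonlinear
limit states of Eq. 20 several iterations and numerical gradients would be needed.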

APPLICATIONS

Daniels systems with n = 3 fibers of deterministic and random exponentially
distributed resistances are examined. Estimates of and bounds on the probability
of failure of these systems are computed for the quasistatic nonstationary
Gaussian load process

X(τ) = d(τ + c) X*(τ)     (21)

in which c = 0.1, d(a) = 1 - e^{-ρa}, ρ = 5, and X*(τ) is a stationary Gaussian
process with covariance function (1 + ω|τ|) e^{-ω|τ|}, ω = 1.0, and mean
E X*(τ) = 5.0.
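Sample paths of the load process of Eq. 21 can be generated on a time grid by
Cholesky factorization of the covariance matrix of X*. The sketch below uses the
stated covariance (1 + ω|τ|)e^{-ω|τ|} and modulation d(a) = 1 - e^{-ρa}; the grid
and the small diagonal "nugget" added for numerical stability are implementation
choices of ours:

```python
import math
import random

# Uniformly modulated load process of Eq. 21:
# X(tau) = d(tau + c) * X*(tau), with d(a) = 1 - exp(-rho * a)
c, rho, omega, mean_x = 0.1, 5.0, 1.0, 5.0
cov = lambda dt: (1.0 + omega * abs(dt)) * math.exp(-omega * abs(dt))

def simulate(taus, seed=0):
    """Sample X on the grid taus via Cholesky factorization of the
    covariance matrix of the stationary process X*."""
    random.seed(seed)
    n = len(taus)
    # small diagonal "nugget" keeps the factorization numerically stable
    C = [[cov(taus[i] - taus[j]) + (1e-9 if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = C[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(s) if i == j else s / L[j][j]
    z = [random.gauss(0.0, 1.0) for _ in range(n)]
    xstar = [mean_x + sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]
    d = lambda a: 1.0 - math.exp(-rho * a)
    return [d(t + c) * xs for t, xs in zip(taus, xstar)]

taus = [0.2 * k for k in range(50)]
x = simulate(taus)
print(len(x))  # 50 samples of the load process
```

Repeated calls with different seeds give the load ensembles needed for Monte
Carlo checks of the analytical failure probabilities.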

There are three distinct spatial configurations for the system:
(1) {R̂1, R̂2, R̂3}; (2) {R̂1, R̂3, R̂2}; and (3) {R̂2, R̂1, R̂3}.

Deterministic Fiber Resistances. Let (a) R̂1 = 1.25, R̂2 = 3.00, R̂3 = 5.00 and
(b) R̂1 = 1.00, R̂2 = 2.50, R̂3 = 5.00 be two sets of deterministic fiber
resistances. Figure 3 gives path and system failure probabilities PF,j(τ) and
PF(τ); for comparison, it also shows the system failure probability under the
equal load sharing rule. Results show that failure paths can have very different
failure probabilities and that the relationship between the reliability of Daniels
systems under local and equal load sharing rules can significantly change in time
depending on fiber resistances.

[Figure 3. Path and System Failure Probabilities for Daniels Systems with n = 3
Fibers of Deterministic Resistances.]

Figure 4 shows bounds on PF(τ) based on the most vulnerable spatial configuration
and residence periods Yn,j. The first order bounds are wide. Higher order lower
bounds nearly coincide in this case with the exact result because PF,2 = 0.

Random Fiber Resistances. Let resistances {Ri} be independent identically
distributed exponential variables with densities fRi(r) = λ e^{-λ(r-a)}, r > a,
where a = 1 and (a) λ = 1 and (b) λ = 5. Figure 5 shows failure probabilities
PF,j(τ) for all failure paths of the system (see Fig. 2). Failure is more likely
when λ = 5 because the probability of high fiber resistances is smaller in this
case.

[Figure 4. Bounds on Probability of Failure of Daniels Systems with n = 3 Fibers
of Deterministic Resistances.]

[Figure 5. Failure Path Probabilities for Daniels Systems with n = 3 Independent
Exponentially Distributed Fiber Resistances.]

Results in Fig. 5 have been used in Fig. 6 to determine and bound the system
failure probability PF(τ) based on Eqs. 2, 4, and 7. Bounds based on failure
times Yn,j and the failure probability corresponding to the most vulnerable
spatial configuration are wide. However, the upper bound P(Y3 < τ) provides a
useful conservative approximation of PF(τ).

[Figure 6. Bounds on Probability of Failure of Daniels Systems with n = 3
Independent Exponentially Distributed Fiber Resistances.]

CONCLUSIONS

A general method was developed for calculating failure probabilities and bounds
on these probabilities for Daniels systems with brittle fibers of random
resistances that are subject to quasistatic and dynamic loads. The analysis
involves a simple local load sharing rule for representing stress concentration
and an elementary failure criterion for fibers. Concepts of the extreme value
theory of stochastic processes and generalized Slepian models were used in the
developments of the paper.

The proposed method was illustrated by reliability analyses of Daniels systems
with n = 3 fibers of deterministic and random resistances subject to a
nonstationary quasistatic Gaussian load process. Results show that the proposed
upper bounds can be used to provide a simple conservative estimate of the system
failure probability.

REFERENCES

1. Fujita, M., Grigoriu, M., and Rackwitz, R., "Reliability of Daniels Systems
Oscillators Including Dynamic Redistribution," Probabilistic Methods in Civil
Engineering, ed. P. D. Spanos, ASCE, NY, 1988, pp. 424-427.

2. Grigoriu, M., "Reliability of Fiber Bundles Under Random Time-Dependent


Loads," Lecture Notes in Engineering, Springer-Verlag, New York, 1987, pp. 175-
182.

3. Grigoriu, M., "Reliability of Daniels Systems Subject to Gaussian Load
Processes," Structural Safety, Vol. 6, 1989, pp. 303-309.

4. Grigoriu, M., "Reliability of Daniels Systems Subject to Quasistatic and


Dynamic Nonstationary Gaussian Load Processes," Probabilistic Engineering
Mechanics, Vol. 4, No.3, Sept. 1989, pp. 128-134.

5. Larson, H. J., Introduction to the Theory of Statistics, John Wiley & Sons,
Inc., New York, 1973.

6. Leadbetter, M. R., Lindgren, G., and Rootzen, H., Extremes and Related
Properties of Random Sequences and Processes, Springer-Verlag, New York, 1983.

7. Lin, Y. K., Probabilistic Theory of Structural Dynamics, Robert E. Krieger


Publishing Company, Huntington, NY, 1976.

8. Phoenix, S. L., "The Stochastic Strength and Fatigue of Fiber Bundles,"


International Journal of Fracture, Vol. 14, 1978, pp. 327-344.

9. Phoenix, S. L., and Smith, R. L., "The Strength Distribution and Size Effect in
a Prototypical Model for Percolation Breakdown in Materials," Technical Report
43, Mathematical Science Institute, Cornell University, Ithaca, NY, April 1989.

10. Taylor, H. M., "The Time to Failure of Fiber Bundles Subject to Random Loads,"
Advances in Applied Probability, Vol. 11, 1979, pp. 527-541.

11. Thoft-Christensen, P., and Murotsu, Y., Application of Structural Systems


Reliability Theory, Springer-Verlag, New York, 1986.

12. Wen, Y. K., and Chen, H.-C., "On Fast Integration for Time Variant Structural
Reliability," Probabilistic Engineering Mechanics, Vol. 2, No. 3, 1987, pp.
156-162.
RELIABILITY ANALYSIS OF ELASTO-PLASTIC DYNAMIC PROBLEMS

Toshiaki Hisada*, Hirohisa Noguchi*, Osamu Murayama* & Armen Der Kiureghian**
*Research Center for Advanced Science and Technology
University of Tokyo, Japan
**Department of Civil Engineering
University of California, Berkeley, USA

ABSTRACT

A sensitivity analysis method for elasto-plastic dynamics problems is developed
in the context of an incremental finite element scheme. A finite element code
based on the proposed method is coupled with the reliability analysis code CALREL,
and the crossing reliability of an elasto-plastic truss structure is analyzed.

1. INTRODUCTION

In probabilistic finite elements and reliability analysis, there is a need to
compute gradients of structural responses with respect to uncertain parameters
that define material properties, structural geometry or loads. A general formula
for the response gradient of nonlinear structures was given by Ryu et al. in 1985
[1]. Liu et al. [2,3] used this formulation in conjunction with the first and
second-order perturbation method to perform second-moment probabilistic analysis
of static and dynamic problems. Liu and Der Kiureghian [4,5] applied the same
formulation in conjunction with the first and second-order reliability methods
(FORM and SORM) to geometrically nonlinear static problems, where for the purpose
of accuracy and efficiency they derived explicit expressions for the derivative
matrices at the element level and coded them in a general-purpose finite element
program. The first and second authors [6,7,8] extended the general result of Ryu
et al. to static nonlinear problems where the response is dependent on the path
of loading, and developed a perturbation scheme for efficiently computing the
response gradients.

The objectives of the present paper are: (a) to develop a method


for computing the response gradients for nonlinear dynamic problems,
which accounts for the path-dependency of the response, and (b) to apply
the methodology in conjunction with FORM/SORM to a time-variant
reliability problem. The first objective is achieved by the first three
authors. The reliability analysis is performed based on a formulation
suggested by the last author and employing the computer code CALREL [9]
developed by his group.

2. FORMULATION TO EVALUATE GRADIENTS OF STATIC SYSTEMS [6,7,8]

In elasto-plastic and/or geometrically nonlinear static problems the governing
finite element equation is given in the incremental form

K dU = dF     (1)

where the tangent stiffness matrix K consists of a linear strain term KL and a
nonlinear strain term KNL, as in Eq. (2) in the general case. Only KL is used for
small displacement elasto-plastic analysis.

K = KL + KNL     (2)

Generally speaking, the solution U depends on the load path taken to reach the
final value F. Therefore, it is theoretically correct to carry out the infinite
step incremental analysis (i.e., an integral) along the load path. In practice,
however, a reasonable number of load steps n is taken as

F = Σ_{i=1}^{n} Fi     (3)

and an iterative scheme is applied to each load step Fi. If each load step is
small, the load path effect can be taken to be negligible during each load step
Fi. In the same way it may be assumed that the equation

∫_{Ui-1}^{Ui} K dU = Fi     (4)

holds (i.e., the integral always gives Fi) regardless of the displacement path
from Ui-1 to Ui. Based on Eq. (4), it is possible to proceed as follows.

Let the system parameter bj have variation δbj (= a bj, |a| ≪ 1), and the
corresponding tangent stiffness be Ka. Following Eq. (4), we obtain

or

(5)

where ΔUi is the total variation of Ui, given by ΔUi = δUi + (higher order
terms). From Eq. (5) we can derive the following equations, neglecting the second
and higher order terms:

(6)

or

(7)

where

(8)

Based on the above equations, δU1, δU2, ..., δUn, and therefore δU1/δbj ≈
dU1/dbj, δU2/δbj ≈ dU2/dbj, ..., δUn/δbj ≈ dUn/dbj, are sequentially obtained.
Considering elasto-plastic problems, we note that δRi cannot be expressed by the
integral of δK, because δK does not always exist. Figure 1 shows the mechanical
interpretation of the present method in the case where loading F1, unloading F2
and loading F3 are sequentially given. It is seen that two vectors such as 1 and
2 cancel each other after each loading step. It should be noted that the
cancelation mechanism does not work if an element (Gauss point) falls exactly in
the "transient state" between elastic and plastic due to the internal stress
equilibrium at the end of a load step.
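The sequential sensitivity scheme can be illustrated on a one-degree-of-freedom
bilinear spring under monotonic loading, where the tangent-stiffness equation
Kt δU = -δR (cf. Eq. (7)) has a closed-form check. All parameter values below
are assumed for illustration:

```python
# Bilinear 1-DOF spring: elastic stiffness k, hardening stiffness kt,
# yield force fy (all values assumed for illustration).
k, kt, fy = 100.0, 10.0, 50.0

def displacement(F, fy_val):
    """Closed-form monotonic response of the bilinear spring."""
    return F / k if F <= fy_val else fy_val / k + (F - fy_val) / kt

def sensitivity(F, fy_val):
    """dU/dfy by the tangent method: solve Kt * dU = -dQ/dfy, where
    dQ/dfy = 1 - kt/k once the spring has yielded (0 while elastic)."""
    if F <= fy_val:
        return 0.0
    return -(1.0 - kt / k) / kt

F_applied = 80.0
du = sensitivity(F_applied, fy)
eps = 1e-4
fd = (displacement(F_applied, fy + eps) - displacement(F_applied, fy - eps)) / (2 * eps)
print(abs(du - fd) < 1e-8)  # tangent sensitivity matches finite difference
```

The agreement reflects the closed form du/dfy = 1/k - 1/kt after yield; in a
multi-degree-of-freedom elasto-plastic problem the same comparison would be made
per load step, as in Fig. 5 below.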

3. FORMULATION TO EVALUATE GRADIENTS OF DYNAMIC SYSTEMS

It is straightforward to extend the above concept and formulation to dynamic
nonlinear problems. The equilibrium equation for dynamic problems is given by

M Üi + Qi = Σ_{l=1}^{i} Fl     (9)

where Üi and Qi are the acceleration and internal force after the ith load step.
The internal force Qi may be described as follows by the tangential damping and
stiffness matrices:

Qi = Σ_{l=1}^{i} ( ∫_{U̇l-1}^{U̇l} C dU̇ + ∫_{Ul-1}^{Ul} K dU )     (10)

Because the tangent stiffness matrix K is often path dependent, the above
sequential integration form to evaluate Qi is essential, but the damping term
might be replaced by a single integral from U̇0 to U̇i due to its nature. The
implicit time integration method requires an iteration scheme such as the
following:

M Üi(k) + C ΔU̇i(k) + K ΔUi(k) = Σ_{l=1}^{i} Fl - Qi(k-1)     (11)

Ui(k) = Ui(k-1) + ΔUi(k)     (12)

U̇i(k) = U̇i(k-1) + ΔU̇i(k)     (13)

Üi(k) = Üi(k-1) + ΔÜi(k)     (14)

where ΔUi(k), ΔU̇i(k), ΔÜi(k) are the kth updating increments, converging to
zero. It is noted that Δ here does not denote perturbation components.
When there is a perturbation in the system, the equilibrium equation is given as
follows instead of Eq. (9):

Ma (Üi + ΔÜi) + Qai = Σ_{l=1}^{i} Fal     (15)

where ΔÜi is the total variation of the acceleration due to the perturbation of
the system, which is denoted by subscript a. Based on Eq. (10) the perturbed
internal force vector Qai is given by

Qai = Σ_{l=1}^{i} ( ∫_{U̇l-1+ΔU̇l-1}^{U̇l+ΔU̇l} Ca dU̇ + ∫_{Ul-1+ΔUl-1}^{Ul+ΔUl} Ka dU )

    = Σ_{l=1}^{i-1} ( ∫_{U̇l-1+ΔU̇l-1}^{U̇l+ΔU̇l} Ca dU̇ + ∫_{Ul-1+ΔUl-1}^{Ul+ΔUl} Ka dU )

    + ∫_{U̇i-1+ΔU̇i-1}^{U̇i} Ca dU̇ + ∫_{Ui-1+ΔUi-1}^{Ui} Ka dU
    + ∫_{U̇i}^{U̇i+ΔU̇i} Ca dU̇ + ∫_{Ui}^{Ui+ΔUi} Ka dU     (16)

According to the method described in the previous section, the following
equations for the first order variations are derived:

M δÜi + C δU̇i + K δUi = -δRi     (17)

δRi = Σ_{l=1}^{i-1} ( ∫_{U̇l-1+δU̇l-1}^{U̇l+δU̇l} Ca dU̇ + ∫_{Ul-1+δUl-1}^{Ul+δUl} Ka dU )

    + ∫_{U̇i-1+δU̇i-1}^{U̇i} Ca dU̇ + ∫_{Ui-1+δUi-1}^{Ui} Ka dU
    - Σ_{l=1}^{i} Fal + Ma Üi     (18)

As for the implicit time integration scheme, any suitable method may be used. In
the present paper the Newmark-β method, as given by Eqs. (19) and (20), is used:

U̇i = U̇i-1 + (1/2)(Üi-1 + Üi) Δt     (19)

Ui = Ui-1 + U̇i-1 Δt + (1/2 - β) Δt² Üi-1 + β Δt² Üi     (20)

It is noted that Δt represents the time step of integration. The components with
superscript (k) are renewed in the Newton-Raphson iteration. The first order
variational representations of Eqs. (19) and (20) due to the system perturbation
are given as follows:

δU̇i = δU̇i-1 + (1/2)(δÜi-1 + δÜi) Δt     (21)

δUi = δUi-1 + δU̇i-1 Δt + (1/2 - β) Δt² δÜi-1 + β Δt² δÜi     (22)

Substituting Eqs. (21) and (22) into Eq. (17), we have the following equation to
solve for the variation of the acceleration:

(M + (Δt/2) C + β Δt² K) δÜi
    = -δRi - C (δU̇i-1 + (Δt/2) δÜi-1)
    - K { δUi-1 + δU̇i-1 Δt + (1/2 - β) Δt² δÜi-1 }     (23)

Because the effective stiffness matrix M + (Δt/2)C + βΔt²K in the above is the
same as that of the original system, the variation δÜi is easily obtained after
the completion of the Newton-Raphson iteration for the original system. Then
δU̇i and δUi are simply calculated by Eqs. (21) and (22).
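The scheme of Eqs. (19)-(23) can be sketched for a linear SDOF oscillator: the
response and its sensitivity with respect to the stiffness share the same
effective stiffness M + (Δt/2)C + βΔt²K, and for this linear case the pseudo-load
reduces to δRi = ui δk. The example below, with assumed parameter values, checks
the computed sensitivity against a central finite difference:

```python
def newmark_with_sensitivity(m, c, k, F, dt, nsteps, beta=0.25):
    """Newmark-beta response (gamma = 1/2) of m*u'' + c*u' + k*u = F(t),
    together with the sensitivity du/dk computed step by step with the
    same effective stiffness (cf. Eq. (23)); here dR_i = u_i."""
    keff = m + 0.5 * dt * c + beta * dt * dt * k
    u = v = 0.0
    a = F(0.0) / m                      # initial acceleration (u = v = 0)
    su = sv = sa = 0.0                  # sensitivities of u, v, a w.r.t. k
    for i in range(1, nsteps + 1):
        t = i * dt
        up = u + dt * v + (0.5 - beta) * dt * dt * a   # displacement predictor
        vp = v + 0.5 * dt * a                          # velocity predictor
        a = (F(t) - c * vp - k * up) / keff
        v = vp + 0.5 * dt * a
        u = up + beta * dt * dt * a
        # sensitivity step: same effective stiffness, pseudo-load -u_i
        sup = su + dt * sv + (0.5 - beta) * dt * dt * sa
        svp = sv + 0.5 * dt * sa
        sa = (-u - c * svp - k * sup) / keff
        sv = svp + 0.5 * dt * sa
        su = sup + beta * dt * dt * sa
    return u, su

# SDOF under a unit step load (assumed data); verify against finite difference
m0, c0, k0 = 1.0, 0.1, 4.0
load = lambda t: 1.0
dt, ns = 0.01, 500
u, su = newmark_with_sensitivity(m0, c0, k0, load, dt, ns)
h = 1e-5
u_plus, _ = newmark_with_sensitivity(m0, c0, k0 + h, load, dt, ns)
u_minus, _ = newmark_with_sensitivity(m0, c0, k0 - h, load, dt, ns)
print(abs(su - (u_plus - u_minus) / (2 * h)) < 1e-6)  # True
```

Because the sensitivity recursion is the exact derivative of the discrete
Newmark map, the agreement with the finite difference is limited only by the
difference step h, mirroring the FDM comparison reported for Fig. 5.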

4. RELIABILITY ANALYSIS OF ELASTIC-PLASTIC DYNAMIC TRUSS STRUCTURE

The sensitivity analysis method as formulated in the previous section is applied
to a dynamic analysis of an elastic-plastic truss structure. Figure 2 shows an
example model which is similar to that used by W-K Liu et al. [3] except for the
loading. The sensitivity analysis program is then combined with the reliability
analysis program, CALREL [9], to evaluate the safety of the system under the
following three assumptions:

Case 1: The load is applied to node 1. Failure occurs when the maximum value of
the vertical displacement of node 2 exceeds its threshold U0 during
t = 0 - 0.48 sec.

Case 2: The load is applied to node 3. Failure occurs when the maximum value of
the vertical displacement of node 2 exceeds its threshold U0 during
t = 0 - 0.48 sec.

Case 3: The load is applied to node 3. Failure occurs when the maximum value of
the stress of element 1 exceeds its threshold σ0 during t = 0 - 0.48 sec.

Figure 3 shows the mean response (the response when random variables are equal to
their mean values) and the design point responses (the responses for the most
likely values of the random variables that give rise to the failure event, at
which point first or second-order approximations to the limit-state surface are
constructed) of the displacement at node 2 for several values of U0 in Case 1.
Figure 4 shows similar results for the stress response of element 1 in Case 3.
Figure 5 shows the sensitivity history of the stress response of element 1 with
respect to the yield stress in Case 3. The sensitivity histories as computed by
the present method are compared with those obtained by a direct finite difference
method (FDM), and good agreement is seen. In fact, the plots of sensitivity by
the direct FDM are practically identical with those in Figure 5.

Figures 6 to 11 show the changes of the reliability index and probability of
failure against the failure thresholds, U0 or σ0, and the probability densities
of the maximum response in each of the above three cases. These values are
evaluated by FORM and (point-fitting) SORM. In these figures, the results of FORM
and SORM are very similar, and the reliability indices are almost linear against
the threshold value in Figures 6 and 9. These results suggest that the
performance functions and limit-state functions are almost linear in the standard
normal space.
Table 1 summarizes the scaled sensitivities δ and η. δ is defined as s ∂β/∂μ and
η as s ∂β/∂s, where μ and s are the mean value and standard deviation of each
random variable, respectively. δ provides a measure of the importance of the
central value of each variable, whereas η provides a measure of the importance of
the uncertainty in each variable. It is seen that the sensitivities with respect
to the Young's moduli are unimportant. On the other hand, the yield stresses of
elements 1 and 3 in Case 1 and elements 7 and 8 in Case 2 are most sensitive.
This is easily understood because these elements are most stressed. But in Case
3, it is interesting to see that the most important variable is the yield stress
in element 7, while the yield stress in element 8 is not so important. This may
not be obvious from an intuitive standpoint.
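The scaled sensitivities δ = s ∂β/∂μ and η = s ∂β/∂s can be illustrated on the
textbook Gaussian margin g = R - S, for which β is available in closed form (this
is a generic example with assumed numbers, not the truss of Fig. 2):

```python
import math

def beta_rs(muR, sR, muS, sS):
    """Reliability index of the linear margin g = R - S with independent
    Gaussian R and S (textbook case, for illustration only)."""
    return (muR - muS) / math.hypot(sR, sS)

# delta = s * dbeta/dmu and eta = s * dbeta/ds, by central differences
muR, sR, muS, sS = 15000.0, 750.0, 10000.0, 1000.0
h = 1e-3
delta_R = sR * (beta_rs(muR + h, sR, muS, sS) - beta_rs(muR - h, sR, muS, sS)) / (2 * h)
eta_R = sR * (beta_rs(muR, sR + h, muS, sS) - beta_rs(muR, sR - h, muS, sS)) / (2 * h)
print(round(delta_R, 4), round(eta_R, 4))  # 0.6 -1.44
```

As in Table 1, the η values are negative: increasing the uncertainty of a
variable always lowers the reliability index for a fixed mean margin.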
In this study, only component reliability analyses are demonstrated. However,
even if failure is defined as the union of some events, e.g., for U01, U02 and
σ0, a simple series system analysis by CALREL provides bounds on the system
probability with no additional finite element analyses.

Table 1. Sensitivity Vectors

          Case 1, U0=85.0       Case 2, U0=55.0       Case 3, σ0=21500.0
 var      δ        η            δ        η            δ        η
 E1       0.0060  -0.0005       0.0011  -0.0001      -0.0206  -0.0011
 E2       0.0007  -0.0001      -0.0006   0.0000      -0.0018  -0.0001
 E3       0.0057  -0.0005      -0.0004   0.0000      -0.0205  -0.0011
 E4       0.0009  -0.0001      -0.0006   0.0000      -0.0045  -0.0002
 E5       0.0000   0.0000      -0.0009   0.0000       0.0025   0.0001
 E6      -0.0001   0.0000      -0.0001   0.0000       0.0025   0.0001
 E7       0.0035  -0.0003      -0.0008   0.0000       0.0005   0.0000
 E8       0.0037  -0.0003       0.0028  -0.0001      -0.0140  -0.0006
 E9       0.0017  -0.0001      -0.0008   0.0000       0.0017   0.0000
 E10      0.0024  -0.0002      -0.0004   0.0000      -0.0035  -0.0001
 σy1      0.5688  -1.0576       0.2614  -0.4452       0.1585  -0.2512
 σy3      0.5669  -1.0531       0.2590  -0.4405       0.2989  -0.5132
 σy7      0.0651  -0.0920       0.4356  -0.8102       0.7257  -1.5213
 σy8      0.0630  -0.0890       0.4263  -0.7892       0.1512  -0.2385
 σy10     0.0638  -0.0901

5. CONCLUSIONS

A finite element sensitivity analysis method for nonlinear dynamic systems is
formulated. It is combined with a reliability software package, CALREL, and an
elasto-plastic analysis of a truss with material uncertainty is presented as an
example. The proposed method is verified to be correct through the numerical
example.

REFERENCES

[1] Ryu, Y.S., Haririan, M., Wu, C.C. and Arora, J.S., "Structural Design
Sensitivity Analysis of Nonlinear Response," Computers and Structures, Vol. 21,
No. 1/2, pp. 245, 1985.

[2] Liu, W-K., Belytschko, T. and Mani, A., "A Computational Method for the
Determination of the Probabilistic Distribution of the Dynamic Response of
Structures," ASME PVP-98-5, pp. 243, 1985.
[3] Liu, W-K., Belytschko, T. and Mani, A., "Probabilistic Finite Elements for
Nonlinear Structural Dynamics," Computer Methods in Applied Mechanics and
Engineering, Vol. 56, pp. 61, 1986.
[4] Liu, P-L. and Der Kiureghian, A., "Reliability Assessment of Geometrically
Nonlinear Structures," Proc. ASCE EMD/GTD/STD Specialty Conference on
Probabilistic Methods, pp. 164, 1988.
[5] Liu, P-L. and Der Kiureghian, A., "Finite Element Reliability of Two
Dimensional Continua with Geometrical Nonlinearity," Proc. 5th International
Conference on Structural Safety and Reliability, pp. 1081, 1989.
[6] Hisada, T., "Sensitivity Analysis of Nonlinear FEM," Proc. ASCE EMD/GTD/STD
Specialty Conference on Probabilistic Methods, pp. 160, 1988.
[7] Hisada, T. and Noguchi, H., "Sensitivity Analysis for Nonlinear Stochastic
FEM in 3D Elasto-Plastic Problems," ASME PVP Vol. 157, Book No. H00472, pp. 175,
1989.
[8] Hisada, T. and Noguchi, H., "Development of a Nonlinear Stochastic FEM and
its Application," Proc. 5th International Conference on Structural Safety and
Reliability, pp. 1097, 1989.
[9] Liu, P-L., Lin, H.-Z. and Der Kiureghian, A., CALREL User Manual,
UCB/SEMM-89/18, 1989.

1
i
oUI K,PU-g

T!
2 K(lh)Olh

t F2
l'
i
UZ
Ur+(ilh
K,PU -Pz

1
Fl
~ 2' K(Uz)Olh

1 0 U2 UZ+SU2 Ul UI+IlUIU3 U3+SU3


1"
i
U3
Uz+a.h
K,PU-&

2" K(U3)OU3

Figure 1. Mechanical interpretation of the perturbation method for sequential loadings

r- -+-360" 360"--J

T
360" t [sec]

1 Case1:
Loading Condition
P=100.0x1~ [lb], P2 =O.O[lb]
Case2,3: P=175.0x1~ [lb], PI =O.O[lb]
CJ
E 30.0x10 6 [lb/in' ] (mean value)
ET 30.0x10 4 [lb/in' ]
A 6.0 [in2 ]
p 0.30[lb/in 3 ]

CJy 15000.0 [lb/in'! ] (mean value)


Stress-Strain Relation

Specification of Probabilistic Variables

Ei(i=1-10) lognormal distribution


Coefficient of variation 0.05
Correlation Coefficients 0.5

oyj(Case1: j=1,3,7,8 and 10, Case2,3: j=1,3,7 and 8)


lognormal distribution
Coefficient of Variation 0.05
Correlation Coefficients 0.5

Figure 2. Example elasto-plastic truss structure


Figure 3 Mean and design point response of U2 (Case 1)

Figure 4 Mean and design point response of σ1 (Case 3)

Figure 5 Sensitivity histories (Case 3)



[FORM and SORM reliability indices β vs displacement threshold U0 = 20-120 in.]

Figure 6 Reliability Index vs Displacement Threshold

[FORM and SORM results for the limit state g = U0 - max U2(t), t = 0-0.48 sec.]

Figure 7 Probability of Failure vs Displacement Threshold

[Density of max U2(t), t = 0-0.48 sec, over the range 20-120 in.]

Figure 8 Probability Density of Maximum Displacement

[FORM and SORM reliability indices for the limit state g = σ0 - max σ1(t), t = 0-0.48 sec, vs stress threshold σ0 = 18000-25000 lb/in².]

Figure 9 Reliability Index vs Stress Threshold

Figure 10 Probability of Failure vs Stress Threshold

[Density of max σ1(t), t = 0-0.48 sec, over the range 18000-25000 lb/in².]

Figure 11 Probability Density of Maximum Stress
IDENTIFICATION OF AUTOREGRESSIVE PROCESS MODEL
BY THE EXTENDED KALMAN FILTER

Masaru Hoshiya & Osamu Maruyama


Department of Civil Engineering
Musashi Institute of Technology, Tokyo 158, Japan

1. Introduction

Stochastic time domain models, such as autoregressive moving average (ARMA), autoregressive (AR) and moving average (MA) models, have received much attention from researchers for the idealization of earthquake ground motions, since with these models not only sample input waves but also response waves may be simulated recursively; they have consequently proven effective in the field of stochastic dynamics [1]~[9].

The purpose of this study, namely to identify an optimal time domain model when ground motion records are given as sample realizations, has already been investigated extensively by many researchers. Apart from these studies [1]~[9], however, the present study uses the extended Kalman filter-weighted global iteration method (the EK-WGI method) [10],[11],[12], a least mean square optimization, to recursively update the unknown parameters of the time domain model, and shows how simply and yet stably a model representing stationary or nonstationary ground motions can be identified.

In this study, a multivariate and one dimensional AR model is employed as the basic model to be identified for observed ground motions.

2. Multivariate and One Dimensional AR Model

A multivariate and one dimensional autoregressive model is expressed as

    Σ_{i=0}^{p} Ai Y(k-i) = B0 X(k)                          (2.1)

Letting A0 = I (unit matrix), Eq.(2.1) becomes

    Y(k) = - Σ_{i=1}^{p} Ai Y(k-i) + B0 X(k)                 (2.2)
In detail, this equation reads

    [Y1(k)]        p  [a11(i) a12(i) ... a1m(i)] [Y1(k-i)]   [b11            ] [X1(k)]
    [Y2(k)]  = - Σ    [a21(i) a22(i) ... a2m(i)] [Y2(k-i)] + [b21 b22    0   ] [X2(k)]    (2.3)
    [  :  ]      i=1  [  :      :           :  ] [   :   ]   [ :   :   .     ] [  :  ]
    [Ym(k)]           [am1(i) am2(i) ... amm(i)] [Ym(k-i)]   [bm1 bm2 ... bmm] [Xm(k)]
where X(k) = [X1(k) X2(k) ... Xm(k)]^T is an (m×1) vector whose components Xi(k) are mutually independent with zero means and variances E[Xi²(k)] = 1. In order to make Eq.(2.2) ready for the Kalman filtering procedure of estimating the unknown coefficient matrices Ai and B0, Eq.(2.2) is converted to

              [ Y^T(k-i)                      ] [ a1i ]
           p  [          Y^T(k-i)      0      ] [ a2i ]
    Y(k) = -Σ [                    .          ] [  :  ]  + B0 X(k)          (2.4)
          i=1 [      0               .        ] [     ]
              [                      Y^T(k-i) ] [ ami ]

where aℓi = Ai^T Jℓ = [aℓ1(i) aℓ2(i) ... aℓm(i)]^T, i.e., the ℓ-th row of Ai written as a column vector.

It is noted that in Eq.(2.4) aℓi is a vector composed of the unknown components of the coefficient matrix Ai, and that Eq.(2.4) may serve as the basis for the state vector equation and the observation equation which are relevant to the identification under consideration.

As to the determination of B0, the following relation (Ref.[1]) may be utilized:

    B0 B0^T = Σ_{i=0}^{p} Ai E[Y(k-i) Y^T(k)]                (2.5)

3. State Vector Eq. and Observation Eq.

In order to apply the EK-WGI method to the identification problem on the coefficient matrices Ai and B0, it is necessary to express the state vector equation and the observation equation in a form appropriate to the problem. Once these equations are stated, it is possible to recursively estimate Ai and B0 as the observation data Yℓ(k) are processed by the extended Kalman filter. The details of this method are described elsewhere [10],[11],[12].

Identification of Ai .- The state vector equation is expressed in the following way, considering the stationarity of the coefficient matrices Ai and based on a random walk model:

    Z(k) = [a11 a21 ... am1 | a12 a22 ... am2 | ... | a1p a2p ... amp]^T   (at step k)
         = Z(k-1) + δ(k-1)                                   (3.1)

where δ(k) is a (m×m×p)×1 vector in which the components are mutually independent Gaussian white noises with zero means and small variances. It is noted that the small variances are necessary in order to maintain the stationarity.

The observation equation, which expresses the relationship between the observation vector and the state vector, is derived based on Eq.(2.4):

    Y(k) = M(k) Z(k) + E(k)                                  (3.2)

where M(k) denotes the block-diagonal matrix of -Y^T(k-i) blocks appearing in Eq.(2.4),

and where Y(k) = [Y1(k) Y2(k) ... Ym(k)]^T, E(k) = [E1(k) E2(k) ... Em(k)]^T, the Ei(k) being mutually independent Gaussian white noises with zero means and arbitrary variances.

Identification of B0 .- Let C = Σ_{i=0}^{p} Âi E[Y(k-i) Y^T(k)] (with Â0 = I); then a lower triangular matrix B̂0 can be obtained by the Cholesky decomposition of C as follows:

           [ b̂11              0    ]
    B̂0  =  [ b̂21  b̂22              ]                         (3.3)
           [  :    :     .          ]
           [ b̂m1  b̂m2  ...   b̂mm   ]

where the values identified for Ai in the previous procedure and the observation data Y(k-i) are used in order to compose the matrix C.

Based on Eq.(2.5) and using the components b̂ij of B̂0 as the observation, the observation equation may be given by

    b̂ij = bij(k) + εij(k)                                   (3.4)

The state vector equation is again a random walk:

    bij(k) = bij(k-1) + δb(k-1)                              (3.5)

It is noted that the set of Eqs.(3.1) and (3.2) and the set of Eqs.(3.4) and (3.5) are used in parallel for the identification of Ai and B0, respectively. Instead of using Eqs.(3.4) and (3.5), B̂0 could be obtained directly from the decomposition following Eq.(2.5). However, it is not wise to do so, since the observation data Y(k-i) involve unknown noises in reality, and the solutions may deviate substantially. In order to obtain stable solutions, therefore, Eqs.(3.4) and (3.5) must be used in the process of the sequential optimization.
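Since the random-walk state model has linear observation equations for this AR case, each pass of the filter follows the standard Kalman recursions. A toy sketch for a scalar AR(1) (all values illustrative; the weighted global iteration itself is omitted):

```python
import numpy as np

def kalman_step(z, P, H, y, R, Q):
    """One predict/update cycle for the random-walk state model
    Z(k) = Z(k-1) + delta(k-1),  Y(k) = H(k) Z(k) + E(k)."""
    P = P + Q                               # predict (state mean unchanged)
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    z = z + K @ (y - H @ z)                 # measurement update
    P = (np.eye(len(z)) - K @ H) @ P
    return z, P

# toy data: y(k) = -a * y(k-1) + x(k), with true coefficient a = 0.6
rng = np.random.default_rng(1)
a_true = 0.6
y = np.zeros(500)
for k in range(1, 500):
    y[k] = -a_true * y[k - 1] + rng.standard_normal()

z = np.zeros(1)                  # initial state assumed zero, as in Section 4
P = np.eye(1)                    # initial error covariance unity
Q = np.zeros((1, 1))             # zero process-noise variance
R = np.array([[0.001]])          # observation-noise variance
for k in range(1, 500):
    H = np.array([[-y[k - 1]]])  # from Eq.(2.4): Y(k) = -a Y(k-1) + noise
    z, P = kalman_step(z, P, H, y[k:k + 1], R, Q)
print(round(float(z[0]), 2))     # estimate of a; should be near 0.6
```

With zero process noise the recursion is equivalent to recursive least squares, which is why the estimates settle toward constant values as more samples are processed.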

The above formulation is also valid even for nonstationary data under the assumption of local stationarity as illustrated in Fig.1. In this case, data centered around the time T are processed for k = 1 to 2r+1 to evaluate Âi at T. Then, together with this Âi, the Y(k-i) are used to obtain the matrix C, which leads to B̂0 in Eq.(3.3). Now, the bij(k) may be identified by applying the EK-WGI method to Eqs.(3.4) and (3.5). Next, data are similarly taken at T+ΔT, where ΔT = nΔt (n ≥ 1), and the processing is repeated to obtain Âi and B̂0 at T+ΔT. Âi and B̂0 between T and T+ΔT may be interpolated linearly.
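The local-stationarity scheme of Fig.1 amounts to re-identifying the model on overlapping windows of length 2r+1 centered every n steps and interpolating the parameters linearly in between. A schematic sketch, where the hypothetical `identify` callback stands in for the full EK-WGI identification:

```python
import numpy as np

def windowed_parameters(y, r, n, identify):
    """Identify a (scalar) parameter on windows of 2r+1 samples centered
    every n steps, then interpolate linearly between window centers."""
    centers = np.arange(r, len(y) - r, n)
    theta = np.array([identify(y[c - r:c + r + 1]) for c in centers])
    k = np.arange(len(y))
    return np.interp(k, centers, theta)   # flat extrapolation at the ends

# toy demo: 'identify' just returns the window mean, standing in for EK-WGI
y = np.concatenate([np.zeros(200), np.ones(200)])
track = windowed_parameters(y, r=40, n=10, identify=np.mean)
print(track[0], track[-1])   # 0.0 1.0
```

The window half-width r trades tracking speed against estimation variance; the paper's choice r = 40, n = 10 corresponds to 0.81 s windows re-centered every 0.1 s at Δt = 0.01 s.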

Fig.l Assumption of Local Stationarity

4. Numerical Examples

Numerical demonstrations are carried out by applying this method to a seismometer array observation database. The records were observed at the Chiba Experiment Station of the Institute of Industrial Science, University of Tokyo, Japan [13]. In this array, seismometers are placed very densely both on the ground surface and in boreholes (Fig.2). Using this array observation database, numerical analyses are carried out.

In applying the EK-WGI method, a common number of global iterations of 5 is employed. The initial values of the state vector are assumed to be zero. The initial error covariances of the state vector are assumed to be unity. δ(k) is not considered in the numerical analysis, i.e., zero variances are assumed. The variances of E(k) are 0.001 on the diagonal and zero otherwise. Furthermore, r = 40 and n = 10 are used in the range of local stationarity.

Application 1.- A set of correlated earthquake acceleration records taken during the Chiba-Toho-Oki Earthquake (1987) is used to identify a two-variate and one dimensional AR(3) model. The earthquake was of magnitude 6.7 and focal depth 58 km, with the epicenter located off the Kujukuri Coast of the Boso Peninsula (Fig.3). The records were observed at an epicentral distance of 45 km. The EW components on the surface (GL-1m) of boreholes CO and P6 are shown in Fig.4, in which the discrete time interval is 0.01 sec and the duration is 39.0 sec.

Because the processes are two-variate, the total number of unknown parameters is 15, and they are identified as shown in Fig.5. Since the

Fig.2 Layout of Complementary Observation System

Fig.3 Map of Location of Recording Station and Epicenters

[EW accelerograms, 0-39.00 sec: (a) CO (GL-1m), MAX = 213.1; (b) P6 (GL-1m), MAX = 223.3.]

Fig.4 Chiba-Toho-Oki Earthquake



" PARAMETER alIO'> " PARAMETER dll(2) " PARAMETER all(s)

~:l~ ~:t--~-.
g
~~~~~~~~~
2
7~'~~~~~~~
1.10 "."0 TIME23.40(SEC) 15.60 a.~D
TlHE ISEC)
0.00 15.60 2]. 40
TIME ISEC)
lI.~O 3t.00

" PARAMETER aI2(1)

0.00 1.80 15.60 23. ~o 31.20 39.00 0.00 15.60 23.40 31.20 39.00 0.00 IS.60 23.40 31.20 39.00
TIME [SECJ TIME [SEC) TIME ISEC)

o • PARAMETER a2ICZ) • PARAMETER a21(3)

~: l~P\\ r~Mvel~ ~AAAA.~~


~:r V I
h .. N

9 ' ,
15.'0 23. to 1I.;;O 39.00 1.80 15.60 23.40 31.20 39.00 0.00 15.00 23.40 31.20 39.00
TIMe ISEC) TIME (SEC) TIME tSECI

" PARAMETER a22(3)


~:~"
",,"m, mm ; :PARAMETER
r== ="
a22(.2)

~o ~d
g ~ co

7· . 1.~O 15.60 23.4Q 31.20 19.00 O,.r,,~~,.~.-,~-~~~-~~~_


15.60 21.40 ]1.20 39.00 o.oa 15.60 21.,a H.20 19.00
TIME ISEC) TIME ISECl TIME (SEC)

g . PARAMETER bll

~:I~
Slo
, g

~~~~~~~~-
0.00 15.60 23.40
TiME ISEC}

" PARAMETER b21

0.00 1.80 IS.60 2].40 ]1.20 39.00


T (ME ISEC!

g • PARAME TER bZ1.

'~:I~AcNd
..Qo
'g
~~~~~~~~~~
0.00 1.80 15.10 23.40 H.20 31.00
TIME ISEC)

Fig.5 Identified Model Parameters



waves are correlated, the identified values of âij and b̂ij for i ≠ j are observed not to be zero.

Application 2.- Six acceleration records observed at borehole CO (GL-1m) during different earthquakes are selected (Fig.6), and each record is used to identify a univariate and one dimensional AR(3) model. These records may be classified into the same category, since the epicenter of each earthquake is located in the vicinity of the Kujukuri Coast of the Boso Peninsula. The identified model parameters are shown in Fig.7, where the tendencies of the parameters â11(1) and â11(2) are almost identical; the parameters b̂0, however, manifest different evolutionary trends. This latter observation is clearly attributed to the different evolutionary amplitude trends of each earthquake.

5. Concluding Remarks

Adaptive identification of the coefficient matrices of an autoregressive model for multivariate and one dimensional nonstationary Gaussian processes is investigated by applying the extended Kalman filter incorporated with a weighted global iteration. The method is applied to recorded earthquake accelerations and is proven to be an effective and efficient method for the identification.

Physical interpretation of the identified model parameters in association with the wave propagation characteristics and the geological and geographical profiles of the ground is an interesting topic for future investigation.

6. Acknowledgement

In performing this study, seismometer array data were offered by the Institute of Industrial Science, University of Tokyo. The authors wish to thank Prof. T. Katayama and his research colleagues at the University of Tokyo for their development of the array database.

Finally, it is noted that the theoretical formulation was developed by the first author, while the numerical calculations were performed by the second author.

[Accelerograms at CO (GL-1m): (a) Wave 1, MAX = 213.1; (b) Wave 2, MAX = 17.1; (c) Wave 3, MAX = 23.2; (d) Wave 4, MAX = 22.5; (e) Wave 5, MAX = 40.6; (f) Wave 6, MAX = 54.0.]

Fig.6 Sample Waves



[Time histories of the identified parameters â11(1), â11(2), â11(3) and b̂0 for Waves 1-6.]

Fig.7 Identified Model Parameters

7. References

1. Samaras, E., Shinozuka, M., and Tsurui, A., "ARMA Representation of Random Processes," Journal of Engineering Mechanics, ASCE, Vol. 111, No. 3, March, 1985.
2. Deodatis, G., and Shinozuka, M., "An Auto-Regressive Model for Non-Stationary Stochastic Processes," Stochastic Mechanics Vol. II, Columbia Univ., April, 1987, pp. 227-258.
3. Naganuma, T., Deodatis, G., and Shinozuka, M., "An ARMA Model for Two-Dimensional Processes," Journal of Engineering Mechanics, ASCE, Vol. 113, No. 2, Feb., 1987.
4. Hoshiya, M., Ishii, K., and Nagata, S., "Recursive Covariance of Structural Responses," Journal of Engineering Mechanics, ASCE, Vol. 110, No. 12, December, 1984.
5. Hoshiya, M., and Shibusawa, S., "Response Covariance to Multiple Excitations," Journal of Engineering Mechanics, ASCE, Vol. 112, No. 4, April, 1986.
6. Hoshiya, M., Maruyama, O., and Kurita, M., "Autoregressive Model of Spatially Propagating Earthquake Ground Motion," Proc. of Probabilistic Methods in Civil Engineering, ASCE, Blacksburg, Virginia, May, 1988.
7. Nau, R. F., Oliver, R. M., and Pister, K. S., "Simulating and Analyzing Artificial Nonstationary Earthquake Ground Motions," Bulletin of the Seism. Soc. of Am., Vol. 72, No. 2, April, 1982.
8. Takahashi, A., and Kawakami, H., "Estimation of Wave Propagation by AR Model," Proc. of the 43rd Annual Meeting, JSCE, October, 1988 (in Japanese).
9. Gersch, W., Taoka, G. T., and Liu, R., "Structural System Parameter Estimation by Two-Stage Least-Squares Method," Journal of Engineering Mechanics, ASCE, EM5, October, 1976.
10. Hoshiya, M., and Saito, M., "Structural Identification by Extended Kalman Filter," Journal of Engineering Mechanics, ASCE, Vol. 110, No. 12, December, 1984.
11. Hoshiya, M., and Maruyama, O., "Identification of a Running Load and Beam System," Journal of Engineering Mechanics, ASCE, Vol. 113, No. 6, June, 1987.
12. Hoshiya, M., "Application of the Extended Kalman Filter-WGI Method in Dynamic System Identification," Stochastic Structural Dynamics - Progress in Theory and Applications, Elsevier Applied Science, pp. 103-124, 1988.
13. Katayama, T., Yamazaki, F., Nagata, S., Lu, L., and Turker, T., "Development of Strong Motion Database for the Chiba Seismometer Array," Earthquake Disaster Mitigation Engineering Laboratory Report No. 90-1 (14), Institute of Industrial Science, University of Tokyo.
THE EFFECT OF A NON-LINEAR WAVE FORCE MODEL
ON THE RELIABILITY OF A JACK-UP PLATFORM

J. Juncher Jensen, Henrik O. Madsen & P. Terndrup Pedersen


The Technical University of Denmark
DK-2800 Lyngby, Denmark

Abstract
The reliability against failure due to excessive stresses in the legs of a jack-up drilling
rig is investigated.
A 3-D linear Timoshenko beam model of the jack-up is used to determine a non-linear
relation between the wave height and the maximum leg stresses. The non-linearities
follow from the use of Morison's formula, Stokes' fifth order wave theory and integration
of wave forces to the actual water surface. Loads due to current, wind and gravity are
included, partly through probabilistic models. Uncertainties in structural parameters such
as the drag coefficient and the yield stress of the leg material are taken into account.
With this model, the probability of failure due to excessive leg stresses is determined
using first and second order reliability methods. These results are compared to the safety
factors according to established codes of practice for allowable stresses. A similar analysis
is given for the safety against overturning. It is found that the established practice for
site approval of jack-up platforms for these two failure modes leads to probabilities of
failure which differ by orders of magnitude.

1. Introduction
Self-elevating mobile offshore drilling units (jack-ups) are being used for water depths up
to about 100 m. The current trend is to design jack-up platforms for operation in
deeper, more exposed areas such as for instance in the northern part of the North Sea.
A jack-up platform is designed for a certain set of environmental conditions but in
many cases these conditions are incompatible with those found on the specific location
where the platform is scheduled to operate. Therefore, a site-specific assessment of the
jack-up platform is normally performed. Different national and international requirements
exist today. However, common to all these requirements is that they are based on
deterministic procedures, where the functional loads and environmental parameters are
chosen such that conservative results are to be expected. Based on these load parameters,
a structural response analysis is performed.
The location approval usually includes a check of stress levels in the legs and pinions, a
calculation of the necessary preload capacity and holding power of the jacking mechanism
and a check of the overturning stability.
The acceptance criteria are in general based on Load Resistance Factor Design (LRFD)
principles where the aim is to ensure consistency in the acceptance checks for various
failure modes and uniformity in risk levels from one location assessment to another.
It is the purpose of the present paper to relate the deterministic safety factors obtained
from these standard procedures with corresponding probabilities of failure in order to
investigate whether such uniformity in risk levels indeed exists with the present practice.
The failure modes considered are compressive chord stress failure and failure due to
overturning. In [1] an analysis of the overturning stability was performed using some
simplifying assumptions in connection with a direct integration of the probability of

failure. In the present paper the FORM/SORM methods, see for instance [2], are applied
using a statistical description of all pertinent parameters.
The results are presented in a diagram showing for an example platform the relations
between the deterministic safety factors and the corresponding probabilities of failure or
safety indices.
The paper only deals with extreme loadings and no account is given to the possibility of
failure due to fatigue.

2. Design Calculations According to Established Practice


Figure 1 shows a typical jack-up platform. Each of the three legs is constructed as a
lattice structure with a triangular cross section and with a spud can attached at the
lower end. The hull is a box girder structure normally much more rigid than the legs.
The most significant environmental loads on the platform are those due to waves, current
and wind. All these loads vary with time and, because of the low stiffness of jack-up structures, they give rise to additional load effects such as increased leg moments due to deck sway and dynamically excited inertia forces. In addition, the weight of the hull is so high that some reduction of the transverse stiffness of the legs occurs (Euler amplification). A consistent mathematical model which includes all these effects is here based on

Figure 1. Jack-up platform.


The hull is modelled as a rigid body but the jack houses containing the elevating
systems and upper guides are flexible relative to the hull.
The legs are modelled as equivalent Timoshenko beams. The stiffness properties of these
beams are determined assuming that the braces do not carry any moments. The legs are

connected to the hull and the jack houses through hinges at the lower and upper guides
and through rotational and vertical springs modelling the flexibility of the elevating
systems and the jack houses.
The hull weight is applied to the legs at the positions of the elevating systems. The
hydrodynamic loads on the legs are calculated using Morison's equation in connection
with an equivalent leg model, [3], without any allowance for shielding or interference
effects. The elevation, velocity and acceleration profiles for the waves are calculated using
Stokes' 5th order wave theory. The current velocity is scaled to yield a constant mass
flow, [3], before it is added to the wave particle velocity. The only vertical component
of the wave loading considered is that due to buoyancy variation effects.
The wind loading is calculated for leg sections above the hull using the same equivalent
leg model as for the wave loading. The wind load on the hull is determined according
to a projected area approach.
Coherent directions of the waves, current and wind are assumed. The design wave height
for a specific location is normally derived as the most probable largest wave height in a
characteristic three-hour short term sea condition. The associated wave period is determined from an average relation, [3]. The current velocity is specified with a piece-wise
linear variation between the sea bed and the still water surface. The wind velocity is
defined as the one-minute mean 10 m above the still water surface. Its variation with
height is taken from the Danish Code [6].
For the design calculations conservative functional loads should be chosen, typically
representing mean values less two standard deviations. This applies to the weight of the
hull and the location of its center-of-gravity.
A series of static calculations using the structural and load modelling described above is performed in order to determine the directions and positions of the Stokes' 5th order wave which produce the highest stresses or lowest safety factors. In these calculations, the corrections due to Euler amplification, deck sway and inertia forces are included in an approximate manner, [3]. However, a comparison with full non-linear stochastic time simulation calculations, [5], shows reasonably good agreement for the extreme loadings on an example platform. For instance, the extreme leg reaction obtained by the design procedure was found in [5] to be within 0.97 to 1.12 of the results from the stochastic time simulation procedure. This indicates that the design procedure can be expected to yield reasonable results.

Figure 2. Top view of example jack-up platform.


Often the worst wave direction is head sea, see Figures 1 and 2, and this is also the case for the example platform considered here. From the numerical results obtained, it is found that the maximum compressive chord stress σm appears in the forward chords of the two aft legs, see Figure 2. The value of σm can be calculated by

(1)

where Mo and Mw are the overturning moments due to waves (including current) and wind, respectively. The leg sectional modulus for the forward chord is denoted by W and the leg sectional area by A. The axial leg load is P. The deck sway is denoted by δ and L is the distance between the aft and forward legs. It should be mentioned that formula (1) is obtained assuming simply supported boundary conditions at the interface between the spud cans and the sea bed. Contributions due to dynamic amplification are included in the moment Mo. Euler amplification is taken into account by reducing the leg bending stiffness by the factor (1 - P/PE), where PE is the Euler load of the leg. Finally, it is seen that the last term in formula (1) gives the contribution to the chord stress from the deck sway δ.
For each of the aft legs, the axial load becomes

    P = (Mg/2) [1 - d/L] + mg                                (2)

where M is the mass of the hull and d is the distance from the center of the aft legs to the center-of-gravity of the hull, see Figure 2. The mass of the part of the leg situated above the lower guide is denoted by m, and g is the acceleration of gravity.
The wind-induced overturning moment Mw is given as

    Mw = (1/2) ρa Cw Aw Lw V²                                (3)

where ρa is the mass density of air, Cw is an equivalent drag coefficient and Aw the projected wind area for wind coming head on. The length Lw is measured vertically from the spud cans to the center-of-action for the wind forces on the hull. Finally, V is a characteristic wind velocity.
The overturning moment Mo due to waves and current has previously been found, [1], to follow quite accurately a cubic polynomial in the wave height H of the Stokes' 5th order wave:

    Mo = CD (a0 + a1 H + a2 H² + a3 H³)                      (4)

Here CD is the drag coefficient for the individual bracing members, and a0, a1, a2 and a3 are coefficients depending on the current profile and, to a minor extent, on the ratio CM/CD. It is noted that the non-linear terms in equation (4) are due to the use of Stokes' 5th order wave theory, non-linear drag forces and integration of the wave forces to the instantaneous position of the water surface at each leg. The wave positions used in deriving equation (4) are those producing the highest overturning moment Mo. Dynamic effects are included in equation (4) using a simple one-degree-of-freedom approach.
The contribution to the stress σm from the deck sway δ is small. Therefore, the deck sway is approximated by

    δ = b0 + b1 H + b2 Mw                                    (5)

where the coefficients b0 and b1 depend on the current profile. Note that the zero crossing period does not enter equations (4)-(5). This is because a one-to-one relation, [3], between the wave height and wave period is assumed in the deterministic analysis leading to these equations. Since the object of the present study is a survival analysis, where the response almost exclusively depends on wave heights and not wave periods, this model is reasonable.

Substitution of equations (2)-(5) into equation (1) yields σm = σm(H,V), provided the current profile, the drag coefficient CD and the structural layout are given. For a specific site, data regarding the design wave height H and wind velocity V are obtained from hindcasting or other statistical sources.

According to the Danish Offshore Code [4], the design response should be calculated using a partial load factor of 1.3 on the environmental loads, and thus on the overturning moments Mo and Mw and on the contribution due to the deck sway. The partial load factor for the functional load P should be taken to be 1.0. Finally, a drag coefficient CD = 0.7 must be used.

The design stress σm^d is to be compared with the critical compressive design chord stress σc^d, calculated taking into account yielding as well as buckling effects. As design yield stress, the 5% yield stress fractile divided by a member resistance factor 1.21 is used, whereas the modulus of elasticity is to be divided by the factor 1.48. The code requirement for an acceptable design is σm^d ≤ σc^d. The ratio

    σc^d / σm^d                                              (6)

is denoted the safety factor and should then be greater than or equal to one.
With regard to failure due to overturning or, more precisely, zero soil reaction on the forward leg, the moment equilibrium criterion

    Mg(d-δ) + mt g(L-δ) ≥ Mo + Mw                            (7)

ensures stability against overturning. Here mt is the mass of one leg.

From the established code of practice, the partial load factors on the environmental (Mo and Mw) and functional (M) loads should be taken as 1.3 and 0.5, respectively. With these design loads, the safety factor

    [Mg(d-δ) + mt g(L-δ)] / (Mo + Mw)                        (8)

shall be greater than or equal to one.

3. Probabilistic description.
Although all parameters entering equations (1)-(5) can be treated as statistical variables, only the most important ones are considered here. The following parameters are given a normal distribution: hull mass M, distance to c.o.g. d, wind area Aw and drag coefficient CD. The critical stress σc, evaluated without partial resistance factors, is taken to be lognormally distributed.

An exponential distribution is assigned to the square of the one year maximum wind velocity V:

    P(V² < x) = 1 - exp[-(x - 0.57 V50²)/(0.11 V50²)]        (9)

in accordance with the Danish Code [6]. The reference velocity V50 is the wind velocity with a return period of 50 years.
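Eq.(9) is a shifted exponential in V², so exceedance probabilities follow directly; evaluating it at V = V50 is a quick consistency check on the 50-year return period (the helper name is illustrative):

```python
import math

def p_exceed_v(v, v50):
    """P(V > v) for the one-year maximum wind speed, from Eq.(9):
    P(V^2 < x) = 1 - exp(-(x - 0.57 v50^2) / (0.11 v50^2))."""
    return math.exp(-(v**2 - 0.57 * v50**2) / (0.11 * v50**2))

v50 = 42.0                              # 50-year wind speed of Section 4 [m/s]
print(round(p_exceed_v(v50, v50), 3))   # exp(-0.43/0.11) ≈ 0.02, i.e. 1/50 per year
```

The annual exceedance probability of V50 comes out at about 1/50, as a 50-year return value should.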

The current profile is assumed fixed. The wave heights are assumed Rayleigh distributed conditional on the significant wave height Hs. The distribution function for the largest wave height H in a sea state is

    P(H < x | Hs) = exp(-N exp(-2(x/Hs)²))                   (10)

where N is the average number of peaks in the short term sea state considered. Here it is assumed that the duration of each sea state is three hours.
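To leading order, the mode of the distribution (10) satisfies N exp(-2(x/Hs)²) = 1, so the most probable largest wave height is Hs·sqrt(ln N / 2). A quick sketch:

```python
import math

def most_probable_hmax(hs, n_waves):
    """Approximate mode of the largest wave height under Eq.(10),
    P(H < x | Hs) = exp(-N exp(-2 (x/Hs)^2))."""
    return hs * math.sqrt(math.log(n_waves) / 2.0)

print(round(most_probable_hmax(1.0, 1000), 2))   # ≈ 1.86, the familiar Hmax ≈ 1.86 Hs
```

With N of the order of 1000 peaks in three hours, this reproduces the commonly quoted design ratio Hmax/Hs ≈ 1.86.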
A Weibull distribution is used for the significant wave height Hs, such that the distribution for the maximum significant wave height Hs,m during a one year period becomes

    (11)

where the location and scale parameters hw and p are determined for the actual location considered, and where N1 = 1 year/3 hours = 2920.
Finally, a positive correlation between the wind velocity V and the significant wave
height Hs is applied.
Thereby, the probabilistic model is defined and a standard FORM/SORM analysis, [2],
can be performed with the limit state functions
(12)
and
Mg(d-b) + m~(L-o) - (Mo + Mw) ~ 0 (13)

Two types of analysis are performed. In the first, fixed values of the significant wave height
Hs and wind velocity V are used. For each significant wave height the probabilities of
failure and corresponding safety indices are compared to the safety factors (6) and (8).
In the second analysis, a specific site is chosen for a one year operation.
In addition to the comparison between probabilities of failure and safety factors, the
sensitivities of the safety indices to the parameters entering equations (1)-(11) are also
given.
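The mechanics of such a FORM analysis can be illustrated, for the special case of a linear limit state with independent normal variables, by a minimal sketch; all coefficients below are invented for illustration, and the paper's actual analysis uses the nonlinear limit states (12)-(13) and the PROBAN program:

```python
import math

# Minimal FORM sketch for a linear limit state g(X) = b0 - sum(a_i * X_i)
# with independent normal X_i; then beta = E[g]/D[g] and Pf = Phi(-beta).
# The sensitivities a and the capacity b0 are invented for illustration.
mu    = [7000.0, 11.0]   # means, e.g. hull mass M [t] and c.o.g. distance d [m]
sigma = [300.0, 1.0]     # corresponding standard deviations from the text
a     = [0.01, 5.0]      # invented linear sensitivities
b0    = 150.0            # invented capacity term

g_mean = b0 - sum(ai * mi for ai, mi in zip(a, mu))
g_std  = math.sqrt(sum((ai * si) ** 2 for ai, si in zip(a, sigma)))

beta = g_mean / g_std                        # Hasofer-Lind safety index
pf   = 0.5 * math.erfc(beta / math.sqrt(2))  # Pf = Phi(-beta)

# FORM importance factors alpha_i^2 (they sum to 1 for independent
# variables), analogous to those reported later in Table 1.
alpha2 = [((ai * si) / g_std) ** 2 for ai, si in zip(a, sigma)]
```

For nonlinear limit states the design point must instead be found iteratively in standard normal space, but the α² values play the same role as the importance factors tabulated below.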

4. Numerical Example.
The pertinent structural parameters for the jack-up platform considered are W =
0.906 m³, A = 0.293 m² and L = 39 m. The coefficients in equation (4) for the
overturning moment Mo for this jack-up platform are, [1], a0 = 11.0 MNm, a1 = 13.1
MNm/m, a2 = 0.671 MNm/m² and a3 = 0.0459 MNm/m³, whereas the coefficients in the
deck sway equation (5) become b0 = 0.21 m, b1 = 0.045 m/m and b2 = 0.002 m/MNm.
In deriving these coefficients a current profile with a surface current of 0.8 m/s and a
bottom current of 0.2 m/s has been used. The water depth is 90 m. For more details,
see [1].
The functional loads are given by a deterministic lightweight of the hull equal to 5200 t,
a variable load of 1800 t and a mass of the leg structure above lower guide m = 24 t.
The total mass me of one leg is 250 t. For the limit state analysis concerning chord
stress failure the full variable load is applied. The mean value of the hull mass is then
M = 7000 t and the standard deviation is taken to be 300 t. For the overturning
analysis, it is conservatively assumed that only half the variable load is present, thus
M = 6100 t with a standard deviation of 150 t. The mean value of the center-of-gravity
distance is d = 11 m and the standard deviation is taken to be 1 m.

The wind loading on the hull is given by equation (3) with CwAw = 1200 m², with a
standard deviation of 60 m². The arm Lw of the wind moment is taken as fixed, Lw = 113
m. The 50-year maximum wind velocity is specified as 42 m/s.
For the drag coefficient CD, measurements presented in [7] yield a mean value of 0.61
with a coefficient-of-variation of 24%. This result will be used here.
The number N of wave heights in a three hours sea state is taken as
N = 3 hours / (4.5 sec + 0.5 sec/m · Hs)
corresponding to numbers slightly higher than 1000.
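Combining this peak count with the conditional distribution (10) gives the most probable largest wave in a sea state; for large N the mode of the distribution lies close to the point where N·exp(−2(x/Hs)²) = 1. A short sketch (the function name is ours):

```python
import math

# Most probable largest wave height in a 3-hour sea state, from Eq. (10)
# with N = 10800 s / (4.5 s + 0.5 s/m * Hs). For large N the mode of
# exp(-N*exp(-2*(x/Hs)^2)) is near the point where N*exp(-2*(x/Hs)^2) = 1,
# i.e. x = Hs * sqrt(ln(N)/2).
def most_probable_max_wave(hs):
    n = 3 * 3600.0 / (4.5 + 0.5 * hs)  # number of wave peaks in the sea state
    return hs * math.sqrt(math.log(n) / 2.0)

# For the design sea state Hs = 7.0 m this gives about 13.3 m, consistent
# with the 13.4 m design wave height quoted below.
print(round(most_probable_max_wave(7.0), 1))  # -> 13.3
```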
For the example jack-up, the critical compressive chord stress σc is rather close to the
yield stress of the chord material due to a short bay length. With a specified yield
stress (5% fractile) of 500 MN/m², the mean value of the critical stress becomes 500
MN/m², with a standard deviation of 25 MN/m² according to [4]. Local bending stresses
in the chords from contact forces at the lower guides are small for the example platform
because of a very stiff elevating system.
Figure 3 shows the results for a three hours exposure to long-crested, short term sea
states as a function of the significant wave height. A constant wind velocity of V = 42 m/s
is used for all sea states in Figure 3.

Figure 3 Safety factors (S) and corresponding β-values and probabilities of failure (Pf)
for three hours of operation in stationary sea states (Hs), V = 42 m/s.

The deterministic safety factors based on established practice, Sσ and Sγ, are given by
the equations (6) and (8). For the present jack-up platform the design wave height is
13.4 m corresponding to a significant wave height Hs = 7.0 m. It is seen that the
criteria for both overturning stability and chord stress failure are nearly satisfied (Sγ =
0.96 and Sσ = 0.99) for Hs = 7.0 m. Thus, in the deterministic design based on the
codes applied here, the safety against failure in these two modes is nearly the same.
The probabilistic analysis gives quite different results. In complete agreement with the
findings in [1], the probability of failure due to overturning is relatively high, about 10⁻²,
for the design sea state. Contrary to that, the probability of compressive chord stress
failure is very low. For Hs = 7.0 m this probability of failure is Pf = 1.4·10⁻⁴ with a
corresponding safety index β = −Φ⁻¹(Pf) = 3.62. The results shown in Figure 3 are from
the SORM analysis, but only small differences were found when compared to the FORM
results.
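The quoted (Pf, β) pairs can be checked directly from the definition β = −Φ⁻¹(Pf), since Φ(−β) = ½·erfc(β/√2); the helper name below is ours:

```python
import math

# Failure probability from safety index:
# Pf = Phi(-beta) = 0.5 * erfc(beta / sqrt(2)).
def pf_from_beta(beta):
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

# Check of the quoted pair for chord stress failure at Hs = 7.0 m:
# beta = 3.62 gives Pf of roughly 1.5e-4, matching the quoted 1.4e-4
# to within rounding of beta.
print(pf_from_beta(3.62))
```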
The explanation for this difference between the probability of failure levels for the two
failure modes is probably to be found in the different design philosophies applied when
deriving the safety factors Sσ and Sγ. The safety factor Sσ for the stress levels, equation (6),
has its basis in civil engineering codes, where there is a tradition for a high degree of
structural safety. This is reflected in the high partial load and resistance factors used.
On the other hand, the safety factor Sγ against overturning, equation (8), is unique for
jack-up platforms and does not contain any partial resistance factor. The reason for not
choosing a partial resistance factor is probably that the limit state criterion (7) is
somewhat conservative, as no account is taken of support eccentricity at the sea bottom.
Due to the large diameters of the spud cans, the center-of-action of the vertical reactions
from the sea bottom will be situated aft of the leg center for head waves. Assuming
a triangular support pressure on the spud can, this offset e will for the example platform
be 2 m. In order to evaluate the effect of this offset on the probability of failure,
the FORM/SORM analysis is repeated using the limit state functions (12) and (13) with
the deck sway δ replaced by δ − e. The offset e is taken as normally distributed with mean
value 2 m and standard deviation 1 m. The result is included in Figure 3 and it is
seen that the inclusion of the offset lowers the probability of failure significantly.
However, the difference between the probabilities of failure for the two failure modes is
nearly unaltered and still about two orders of magnitude.
Finally, a one-year operation in the Baltic Sea is considered. Based on wave statistics,
[8], the parameters hw and p in the Weibull distribution, equation (11), are determined
to be hw = 1.2 m and p = 1.29.

A variation of the wave direction ±30° from head sea only reduces the chord stresses
by 5%. Therefore, long-crested waves are assumed in the present study. In [1], it is
found that the probability of failure due to overturning is reduced by a factor of about
2.5 when short-crested waves are used.
The most probable largest wave height in the Baltic Sea in one year is 13.6 m, which
is very close to the design wave height of 13.4 m. The safety factor against compressive
chord stress failure, equation (6), is Sσ = 0.98, whereas the safety factor against
overturning failure, equation (8), becomes Sγ = 0.95.

From the FORM/SORM analysis the following results are obtained for the four cases
considered:

A.1 Chord stress failure (e = 0): Pf = 6.4·10⁻⁴, β = 3.22
A.2 Chord stress failure (e = 2 m, C.O.V. = 0.5): Pf = 1.1·10⁻⁴, β = 3.69
B.1 Overturning (e = 0): Pf = 9.5·10⁻³, β = 2.34
B.2 Overturning (e = 2 m, C.O.V. = 0.5): Pf = 3.0·10⁻³, β = 2.75

The same difference between the probabilities of failure as found in the short term sea
states is observed here.
The importance factors and design points from the FORM analysis for the stochastic
variables are shown in Table 1, whereas Table 2 contains a sensitivity analysis for all
parameters used. The notation A.1, A.2, B.1 or B.2 refers to the above-mentioned cases.

Table 1: Importance factors in pct. for the statistical parameters. Baltic Sea, one year
operation. Case A.2: chord stress failure; Case B.2: overturning failure. Leg support offset
assumed.

                                     Importance factor (pct.)     Design point
Parameter                              A.2      B.2               A.2          B.2
Significant wave height Hs and          62       62               9.36 m       8.35 m
  correlated (0.7) wind velocity V                                43.0 m/s     40.4 m/s
Wave height H conditioned on Hs         19       17               20.8 m       17.9 m
Drag coefficient CD                     12       15               0.79         0.77
Offset distance e                        5        5               1.15 m       1.37 m
Critical stress σc                       2        -               487 MN/m²    -
Hull mass M, center-of-gravity
  distance d, wind area CwAw           < 1      ≤ 1               -            -

From Table 1 it is seen that the major uncertainty is due to the significant wave
height Hs and that the only two other statistical parameters of importance are the wave
height conditioned on Hs and the drag coefficient CD. All other parameters can be taken
as fixed at their mean values. However, as seen from Table 2, the safety index β
depends of course on these mean values.
Table 2 can be used to assess the sensitivity of the safety index β to all parameters.
The probability of chord stress failure is especially sensitive to changes in the drag
coefficient CD, the critical stress σc and the leg sectional modulus W. Minor changes in
the response coefficients a0, a1, a2, a3, b0, b1 and b2 are not important. For the
probability of failure due to overturning the most important parameters are the hull
mass M, the center-of-gravity distance d and the drag coefficient CD. Of course, for
both failure modes a change in the significant wave parameters hw and p is of
significant importance. However, a direct interpretation of the results given in Table 2 is
not easy because p and hw are correlated.

Table 2: Sensitivity analysis. Baltic Sea, one year operation.
Case A.2: chord stress failure, support offset
Case B.1: overturning failure, no support offset
Case B.2: overturning failure, support offset

                                     Change in β for a 10% change in parameter value
Parameter                             A.2      B.1      B.2
Hull mass M (mean)                   -0.05     0.26     0.24
Distance c.o.g. d (mean)              0.02     0.29     0.24
Wind area CwAw                       -0.05    -0.07    -0.06
Drag coefficient CD (mean)           -0.14    -0.17    -0.16
Drag coefficient CD (s.d.)           -0.04    -0.04    -0.04
Critical stress σc (mean)             0.25     -        -
Critical stress σc (s.d.)            -0.01     -        -
Sectional modulus W                   0.20     -        -
Wave response coefficient a0          0        0        0
Wave response coefficient a1         -0.05    -0.07    -0.07
Wave response coefficient a2         -0.05    -0.06    -0.06
Wave response coefficient a3         -0.05    -0.07    -0.07
Length L                              0.01     0.02     0.02
Mass of leg above L.G.                0        0.02     0.02
Cross sectional area A                0.05     -        -
Deck sway coefficient b0              0       -0.01     0
Deck sway coefficient b1             -0.02    -0.02    -0.02
Deck sway coefficient b2             -0.01    -0.01    -0.01
Height Lw for wind force             -0.05    -0.07    -0.06
50 years wind velocity V50           -0.09    -0.13    -0.12
Offset distance e (mean)              0.05     -        0.04
Offset distance e (s.d.)             -0.02     -       -0.01
Location parameter hw                -0.40    -0.43    -0.41
Scale parameter p                     0.52     0.51     0.50

5. Conclusions
For a typical jack-up platform probabilities of failure have been calculated by the
FORM/SORM methods. The failure modes considered are collapse of a chord in com-
pression and overturning of the platform.
The structural modelling takes into account non-linear effects due to the use of a
Stokes 5th order wave theory, Morison's equation and integration of the wave loads to
the instantaneous wave surface. In addition, the destabilizing effect of the hull weight on
the legs, the deck sway and the dynamically excited inertia forces are taken into
account.

A sensitivity analysis shows that the uncertainty in the wave height estimation is the
dominant factor in the calculation of the probability of failure. The only other uncertain
parameter of any importance is the drag coefficient. The remaining parameters can be
taken as deterministic quantities.
The main finding of the study is that, for conditions with equal factors of safety according
to the established design practice, the probability of failure due to overturning is
considerably higher than the probability of chord failure due to high stresses.
The actual figures for the probabilities of failure can be discussed. For example the
selection of a standard 2-D wave theory may be a conservative element. However, since
leg stress response and overturning moment depend on the load model in the same
manner, this model uncertainty cannot alter the overall conclusion that the present codes
of practice do not ensure a reasonable uniformity of risk level for different failure modes.

Acknowledgement
The research work presented in this paper was partially sponsored by The Danish Tech-
nical Research Council (Grant no. 5.26.09.06) and by the Vetlesen Foundation. The
reliability analysis program PROBAN was used for the reliability calculations. Valuable
discussions with Professor A.E. Mansour during the first author's visit to Berkeley are
greatly appreciated.

References
[1] Jensen, J.J., Mansour, A.E. and Pedersen, P.T.: "Reliability of Jack-Up Platform Against
Overturning", DCAMM-report No. 399, November 1989, Lyngby, Denmark, (Submitted to J. of
Marine Structures).

[2] Madsen, H.O., Krenk, S. and Lind, N.C.: "Methods of Structural Safety", Prentice Hall Inc.,
Englewood Cliffs, New Jersey, 1986.

[3] Odland, J.: "Response and Strength Analysis of Jack-Up Platforms", Norwegian Maritime
Research, No.4, pp. 2-25, 1982.

[4] Dansk Ingeniørforening's Code of Practice for Pile-Supported Offshore Steel Structures, DS 449,
Translation Edition, September 1984, DIF-ref. No. NP-162-T, Teknisk Forlag, Copenhagen,
Denmark.

[5] Kjeøy, H., Bøe, N.G. and Hysing, T.: "Extreme Response Analysis of Jack-Up Platforms",
Proc. Second Int. Conf. on The Jack-Up Drilling Platform, Ocean Eng. Research Center, Dep.
of Civil Engineering, London, September 1989.

[6] Dansk Ingeniørforening's Code of Practice for sikkerhedsbestemmelser for konstruktioner (Safety
of Structures), DS 410, Danish Edition, June 1982, DIF-ref. No. NP-157-N, Teknisk Forlag,
Copenhagen, Denmark.

[7] Kim, Y.K. and Hibbard, H.C.: "Analysis of Simultaneous Wave Force and Water Particle
Velocity Measurements", Proc. Offshore Technology Conference, OTC paper No. 2192, 1975.

[8] Hogben, N., Dacunha, N.M.C. and Olliver, G.F.: "Global Wave Statistics", British Maritime
Technology Ltd., UK, 1986.
OPTIMUM CABLE TENSION ADJUSTMENT
USING FUZZY REGRESSION ANALYSIS

Masakatsu Kaneyoshi*, Hiroshi Tanaka*, Masahiro Kamei** & Hitoshi Furuta***


*Steel Structure Design Dept., Hitachi Zosen Corp.
Sakurajima 1-3-40, Konohana-ku, Osaka 554, Japan
**Public Works Bureau, Osaka Municipal Office
Umeda 1-chome 2-2-500, Kita-ku, Osaka 530, Japan
***Dept. of Civil Engineering, Kyoto University
Yoshida Honmachi, Sakyo-ku, Kyoto 606, Japan

1. INTRODUCTION
To determine the optimum cable pre-stresses (i.e., prestress) in the design of cable-stayed bridges is
one of the most important, but time consuming, procedures. Various kinds of errors will be introduced during
construction. Therefore, cable length adjustment is necessary to alter the stress distribution and the
geometrical configuration of the bridge. The authors have developed new methods to overcome these
problems through the use of the fuzzy set theory. First, a method is formulated for obtaining optimum cable
prestress. Secondly, a new system identification method is exploited by applying fuzzy regression analysis.
Finally, a method is formulated to adjust the cable length by shim plates. The results of numerical examples
show that the proposed methods are not only simple to handle, but also very practical for the design and
construction of cable-stayed bridges.
The flow-diagrams in Figs. 1 and 2 show the applications of fuzzy set theory to the design and erection
of suspended structures (i.e., cable-stayed bridges, suspension bridges, etc.).


Fig. 1 Application of fuzzy set theory for design of suspended bridge

Fig. 2 Application of fuzzy set theory for erection of suspended bridge



2. OPTIMUM ADJUSTMENTS OF CABLE TENSION

(1) Fuzzy Pre-stress Method (FPS)


The most prominent characteristic of a cable-stayed bridge is the introduction of cable pre-stresses to
change the static equilibrium in order to reduce its weight. Therefore, many theoretical studies of cable
pre-stress have been conducted during the last few decades. Yamada and Furukawa [3] developed a
method which is based on the criterion of the least strain energy of the cable-stayed bridge. Their method
is referred to as the "least strain energy method." However, it requires the iterative (time consuming)
examination of member sections.
Fuzzy regression analysis can be applied to this problem; the resulting method is called the Fuzzy Pre-stress
Method (FPS). The algorithm of the FPS is quite similar to both the "Fuzzy System Identification method (FSI)"
and the "Fuzzy Shim Adjustment Method (FSA)" which will be mentioned later.

(2) Fuzzy System Identification Method (FSI)


The authors have applied the system identification (SI) method to the construction of cable-stayed bridges
such as the Sugahara-Shirokita Bridge in Osaka, Japan [1]. Error factors can be identified and quantified by the
SI method. This permits the prediction of the final construction state of the bridge and also permits cable
tension adjustment, thereby reducing camber error or member force error. The results of applying the SI
method have been quite successful. However, if field measured data containing measurement error are not
correct, or if the choice of error factors or the assumptions of their magnitudes are not proper, the estimation
of the error factors becomes unreliable. In order to overcome these problems, a new system identification
method has been developed, in which measured field data are assumed to be fuzzy data and fuzzy
regression analysis is applied in the process of system identification. Although the method includes
fuzzy coefficients in the formulation, it can be solved without difficulty using a linear programming algorithm.
The method, called the Fuzzy SI method (FSI), can be thought of as an extension of the conventional SI
method. The FSI is more practical and efficient than the conventional SI method from the standpoint of
numerical computation.

(3) Fuzzy Shim Adjustment Method (FSA)


For the control of camber and cable forces during construction, it is necessary to calculate the thickness
of the shim plate to be added or removed at the end of the cables. Usually, the conventional method [2],
which applies the least squares method, requires the determination of weighting parameters to obtain
pertinent solutions. This leads to an increase in the calculation time, which is undesirable for practical use.
To remove these shortcomings, a method has been formulated by applying fuzzy
regression analysis with almost the same algorithm as the FSI. It is noted that the Fuzzy Shim Adjustment Method
(FSA) needs no weighting parameter and gives practical solutions taking into account the designers'
intention or judgment for constraint conditions.

3. FORMULATION

3.1 Fuzzy Prestress Method (FPS)


The FPS proposed here, using fuzzy regression analysis [4], will automatically determine the amount of
optimum prestress in the cables if the upper and lower bounding values of the design goals are given by the
bridge designer. The algorithm is not complicated and the determination is robust.
Structural member forces after introducing pre-stress, F̃o = (Fo, ΔF), are assumed as follows:

F̃o = Fd + Σ(i=1..N1) X̃i · Ki                (1)

where Fd is the structural member force due to dead load; X̃i and Ki are, respectively, the fuzzy variable and
the member force influence coefficient due to unit pre-stress of cable (i). The wave symbol (i.e., ~) indicates
fuzzy sets, which are specified in terms of membership functions as shown in Fig. 3. If the design goal (F̃o) for
the bending moment at a certain nodal point is given by the region between 1000 and 1200 tfm, then one can
assume that Fo is 1100 tfm and ΔF is 100 tfm. Eq. (1) is based on the assumption that the member force F̃o
has fuzziness. The fuzzy variable X̃i is obtained by solving the following maximum problem of fuzzy regression
analysis, i.e., a linear programming problem [4]:

Find ai, ci

maximize  J(ci) = Σ(i=1..N1) Σ(j=1..M1) ci · |Kji|                (2)

subject to  Foj ≥ Fdj + (1−h) Σ(i=1..N1) ci·|Kji| − (1−h)·ΔFj + Σ(i=1..N1) ai·Kji                (3)

           −Foj ≥ −Fdj + (1−h) Σ(i=1..N1) ci·|Kji| − (1−h)·ΔFj − Σ(i=1..N1) ai·Kji                (4)

            j = 1, 2, ..., M1;   ci ≥ 0                (5)

where
M1 = the number of design goals (e.g., member forces, etc.)
N1 = the number of cable members
Kji = influence coefficient of member force (j-component) due to unit pre-stress of cable (i)
Foj = central value of design goal of member force (j)
ΔFj = one half of the difference between upper and lower bounds of design goal (j)
ci, ai = parameters of the membership function μX̃i(a)
h = fitness (threshold) parameter in dealing with fuzzy data (0 ≤ h < 1)

Fig. 3 Membership function of X̃i

Eq. (3) corresponds to the constraint condition on the upper bound of the design goal and Eq. (4)
corresponds to that on the lower bound. If the objective function (Eq. (2)) becomes maximum, the fuzzy
output F̃o will approach the central values of the design goals. The fitness (threshold) parameter h should be
chosen as follows: it takes a value close to zero if the input data are reliable and close to one if they are
not.
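A minimal sketch of the quantities behind Eqs. (1)-(5) follows; the helper names and all numbers are invented, and the inequality direction reflects our reading of the OCR-damaged constraints, namely that the level-h interval of the fuzzy member force must lie inside the design-goal interval:

```python
# Sketch of the fuzzy member force of Eq. (1): with symmetric triangular fuzzy
# variables X_i = (a_i, c_i), the member force is a triangular fuzzy number with
# center Fd + sum(a_i*K_i) and half-width sum(c_i*|K_i|). Numbers are invented.
def fuzzy_member_force(fd, a, c, k, h=0.0):
    """Level-h interval [lo, hi] of the fuzzy member force."""
    center = fd + sum(ai * ki for ai, ki in zip(a, k))
    spread = (1.0 - h) * sum(ci * abs(ki) for ci, ki in zip(c, k))
    return center - spread, center + spread

def within_goal(interval, goal_center, goal_halfwidth, h=0.0):
    """Constraints (3)-(4) as read here: the level-h output interval must lie
    inside the level-h design-goal interval."""
    lo, hi = interval
    return (goal_center - (1.0 - h) * goal_halfwidth <= lo and
            hi <= goal_center + (1.0 - h) * goal_halfwidth)

# Two cables acting on one bending moment, goal 1100 +/- 100 tfm as in the text:
iv = fuzzy_member_force(fd=900.0, a=[40.0, 10.0], c=[3.0, 2.0], k=[5.0, 2.0])
print(iv, within_goal(iv, 1100.0, 100.0))  # -> (1101.0, 1139.0) True
```

The linear program of Eqs. (2)-(5) then searches for the ai, ci that maximize the total spread while every design goal keeps this inclusion property.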

3.2 Fuzzy System Identification (FSI)


Error factors can be estimated quantitatively by the FSI. If measured data are assumed to be fuzzy data,
fuzzy regression analysis can be applied.
Let Z̃ be an error vector, the components of which consist of camber errors and member force errors. Z̃
can be written as a fuzzy set by a linear superposition of error mode vectors with fuzzy coefficients as
follows:

Z̃ = Σ(i=1..N2) Ãi · Fi                (6)

(Fi: error mode vector; Ãi replaces ai, which was assumed constant in the conventional SI method)

where the fuzzy coefficient Ãi is called the error contribution rate, which has to be determined. If fuzzy
regression analysis is applied with the introduction of the threshold parameter h (0 ≤ h < 1), the error
contribution rates will be given by Eqs. (7)-(10) below. Though the minimum problem is also discussed in
reference [5], only the maximum problem is discussed here for consistency of methodology. The
algorithm of the FSI is as follows:

Find ai, ci

maximize  J(ci) = Σ(i=1..N2) Σ(j=1..M2) ci · |Fji|                (7)

subject to  Zj ≥ (1−h) Σ(i=1..N2) ci·|Fji| − (1−h)·ej + Σ(i=1..N2) ai·Fji                (8)

           −Zj ≥ (1−h) Σ(i=1..N2) ci·|Fji| − (1−h)·ej − Σ(i=1..N2) ai·Fji                (9)

            j = 1, 2, ..., M2;   ci ≥ 0                (10)

where
M2 = the number of field measurement items
N2 = the number of error factors
Zj = the components consisting of camber errors and member force errors
Fji = j-component of error mode vector Fi
ci, ai = parameters of the membership function μÃi(a)
ej = measurement (i.e., fuzzy output) error
h = fitness (threshold) parameter in dealing with fuzzy data (0 ≤ h < 1)

3.3 Fuzzy Shim Adjustment Method (FSA)


The FSA method proposed in this paper can be used to calculate how many shim plates should be
inserted or removed, considering the designer's intention, by incorporating the design goals into the
constraint conditions. The time consuming process of determining weighting parameters is then not
required.
Let Ỹ = (Y, ΔY) be the designers' intended or judged tuning goal, for example, for member forces or
displacements by shim adjustment. A shim plate determination algorithm can be formulated in a way similar
to 3.1, if Si is assumed to be the influence coefficient of cable tension force, camber change, or bearing
action force due to unit shim plate thickness:

Ỹ = Σ(i=1..N3) B̃i · Si                (11)

Find ai, ci

maximize  J(ci) = Σ(i=1..N3) Σ(j=1..M3) ci · |Sji|                (12)

subject to  Yj ≥ (1−h) Σ(i=1..N3) ci·|Sji| − (1−h)·ΔYj + Σ(i=1..N3) ai·Sji                (13)

           −Yj ≥ (1−h) Σ(i=1..N3) ci·|Sji| − (1−h)·ΔYj − Σ(i=1..N3) ai·Sji                (14)

            j = 1, 2, ..., M3;   ci ≥ 0                (15)

where
M3 = the number of member forces
N3 = the number of cable members
Yj = central value of the tuning of the j-component
ΔYj = one half of the difference between upper and lower bounds of the tuning of the j-component
Sji = influence coefficient of the j-component of member force or camber due to unit shim adjustment
of cable (i)
ci, ai = parameters of the membership function μB̃i(a)
h = fitness (threshold) parameter in dealing with fuzzy data (0 ≤ h < 1)
4. Numerical examples

4.1 Model and dimensions


A numerical example of each method will be demonstrated on the model of the cable-stayed bridge
shown in Fig. 4. The general dimensions and nodal points are shown in Fig. 4; member numbers are shown
in Fig. 5. Design dimensions and dead loads are listed in Table 1 and Table 2, respectively.

Table 1. Design dimensions

          Cross Sectional    Moment of       Young's
          Area               Inertia         Modulus
          A (m²)             I (m⁴)          E (tf/m²)
Girder    0.3~0.4            0.25~0.35       2.1×10⁷
Tower     0.3                0.25            2.1×10⁷
Cable     0.0055~0.012       -               2.0×10⁷

Table 2. Dead load

                           w (tf/m)
Girder (pavement, etc.)    7.0
Tower                      5.0
Cable                      0.05~0.10

Fig. 4 Main dimensions and nodal numbers

Fig. 5 Member numbers



4.2 Example for Fuzzy Pre-stress Method


The girder moment distribution and cable tension force due to dead load (i.e., before introducing
pre-stress) are shown in Fig. 6. Design goals (Table 3) were determined according to the expert knowledge
of the designer as input for the FPS. The results of the FPS are shown in Fig. 7. A comparable solution is shown
in Fig. 8, which was derived from the "least strain energy (LSE) method" by Yamada and Furukawa. Both
methods show almost the same smoothness in the configuration of the moment distribution. The LSE method
shows a smaller difference between maximum and minimum moment than the FPS. However, it does not
coincide with the designers' common sense that the moment at the supporting point should be smaller than
that of the side span in Fig. 8. The FPS, on the other hand, can change the moment distribution by altering
the design goals. Therefore, the moment at the supporting point is smaller than that of the side span as shown
in Fig. 7, and the cables' tensions are almost the same when the diameters are the same (i.e., member Nos.
503-506).
The solutions of the FPS reflect the intention of the designer, as demonstrated here, by giving design
goals. In other words, the FPS has a control function. Therefore, it is very effective in the design of
steel-concrete cable-stayed bridges where the bending moment must be kept very small at the connection of
steel and concrete girders. Moreover, it is easy to add more control functions, for example, those on camber
or on supporting reaction forces.

Table 3. Design goals

Item            Member No.    Range
Prestress       All Cables    -150~+150 tf
Moment          3             -1350~-500 tf·m
"               9             -1600~+1300 "
"               13            +500~+1200 "
"               16            -1000~+1000 "
"               17            -500~+500 "
Cable Tension   501           550~650 tf
"               502           300~350 "
"               503           170~250 "
"               504           170~250 "
"               505           170~250 "
"               506           170~250 "

Fig. 6 Moment diagram before pre-stress, with cable tension forces:

Member No.  Force (tf)     Member No.  Force (tf)
501         492.57         511         482.79
502         344.38         512         337.05
503         195.95         513         191.57
504         180.07         514         175.54
505         157.58         515         153.71
506         111.71         516         111.71

Fig. 7 Moment diagram after pre-stress by FPS, with cable tension forces:

Member No.  Force (tf)     Member No.  Force (tf)
501         614.79         511         602.58
502         300.00         512         293.62
503         170.00         513         166.20
504         170.00         514         165.72
505         193.19         515         188.44
506         187.95         516         187.95

Fig. 8 Moment diagram after pre-stress by LSE, with cable tension forces:

Member No.  Force (tf)     Member No.  Force (tf)
501         599.51         511         587.61
502         301.63         512         295.21
503         170.85         513         167.03
504         210.96         514         205.65
505         210.05         515         204.88
506         156.78         516         156.78



4.3 Example for Fuzzy System Identification


This numerical model contains six kinds of errors (deviations) as given in Table 4. Therefore, camber
and member force errors will be introduced in the model. Forward analysis was used to obtain an analytical
model with errors. Error factors and their magnitudes (i.e., assumed values) are selected as listed in Table
4. The error vector Z̃ can be obtained from field measurement data for each erection step. The error vector
consists of the camber error, cable tension errors, etc. In this example, we use the camber error and the
cable tension errors obtained by subtracting those of the assumed system from those of the input error system in
Table 5. Measurement errors of cable tension and camber (e.g., girders and towers) were assumed to be 2 tf
and 2 cm, respectively. Then, ej becomes 2.0 (tf/cm). Error contribution rates ai are given as a range of
values by the FSI as shown in Table 5. On the contrary, the solutions ai by the SI method are precise, without
a range. If the ai are given with a range, the predicted values at bridge completion will also be given with a
range. The best strategy is to prepare for the worst scenario given by the FSI. The optimum solution for the
magnitudes of the shim thickness will be given considering the adjustment of all the unset cables. This way is
more flexible than the conventional method [2], because the latter takes care of only one or two cables at each
adjustment. The prediction for the completion can suggest how much strictness is needed during the early
stage of construction.
Table 4. Error systems

Error                    Error Factors
mode No.   Input error system              Assumed error system
1          Girder weight 5% decrease       Girder weight 10% increase
           in each side span               in each side span
2          Girder weight 5% increase       Girder weight 10% increase
           in center span                  in center span
3          Moment of inertia 6% increase   Moment of inertia 10% increase
           in each side span               in each side span
4          Moment of inertia 4% decrease   Moment of inertia 10% increase
           in center span                  in center span
5          Moment of inertia 0% increase   Moment of inertia 10% increase
           in pylon                        in pylon
6          Young's modulus 10% decrease    Young's modulus 10% decrease
           in all cables                   in all cables

Table 5. Comparison between FSI and SI

Error      Precise
mode No.   Estimation   FSI (h=0.5, ej=2.0)     SI method
a1         -0.5         -0.49972                -0.49723
a2         +0.5          0.49995~0.50001         0.50004
a3         +0.6          0.59673~0.59797         0.57924
a4         -0.4         -0.40173~-0.39908       -0.40025
a5          0.0         -0.00293                -0.00771
a6         +1.0          0.99978~1.00000         0.99994

4.4 Example for Fuzzy Shim Adjustment Method


The FSA was applied to the assumed error system with the two cases of constraint conditions in Table
6. The solutions by the FSA are compared with those by the conventional method (i.e., LSM) in Table 7. The
magnitudes of the shim thickness found by these methods are almost the same (a minus sign indicates
removal of the shim plate). As shown in Table 8 (e.g., cable No. 501 of Case 1), some cable tension errors
of the LSM violate the constraint conditions in Table 6. The LSM gives a solution which satisfies the
constraint conditions if the weighting parameters are changed continually. However, this procedure is
unacceptable at an erection site, usually late at night. The FSA does not need an iterative procedure like that
of the LSM and can give solutions which reflect the designer's intention in a short time, only requiring the
design goals to be given in the constraint conditions for the linear programming algorithm.

Table 6. Constraint conditions for shim adjustment


Constraint condition                       Case 1           Case 2
Thickness of shim plate (mm)               -60~+60          -35~+35
Cable tension error (tf)                   -15~+15          -20~+20
X-direction displacement of tower (m)      -0.100~+0.100    -0.100~+0.100
Y-direction displacement of girder (m)     -0.350~+0.350    -0.300~+0.300

Table 7. Results of shim thickness

                                          unit (mm)
             Case 1               Case 2
Cable No.    FSA       LSM        FSA       LSM
501         -47.94    -51.37     -35.00    -33.33
502         -35.45    -40.62     -24.17    -23.81
503         -10.15    -17.81     - 0.45    - 2.40
504          12.31      3.41      14.27     16.04
505          26.37     20.21      27.65     29.47
506          32.78     26.89      33.60     32.29
511          17.08     16.91      25.73     30.12
512         - 8.38    - 7.49     - 0.79      6.54
513         -16.76    -14.29     - 7.25      0.12
514         -29.33    -23.95     -22.58    -11.35
515         -35.63    -27.15     -35.00    -17.98
516         -34.17    -23.99     -32.56    -19.38

Table 8. Residual cable tension force after tension adjustment

                                          unit (tf)
             Case 1               Case 2
Cable No.    FSA       LSM        FSA       LSM
501          14.17     16.42      17.82     18.68
502          14.05     14.09      17.08     16.04
503          13.35     11.23      16.18     12.89
504          10.90      7.60       6.22      8.99
505           2.42      4.07       0.00      5.14
506           0.00      1.04       0.00      1.71
511          14.47     11.95      17.14     13.71
512          14.77     13.01      14.90     14.65
513          14.34     13.41      17.08     14.87
514          12.63     12.61      14.05     13.71
515           8.23     10.26       2.61     10.91
516           0.00      6.47       0.00      6.68

5. Conclusion
1) The algorithms of the FPS, FSI and FSA are simple to implement; therefore, solutions which are
intended or justified by structural designers can be reached in a short time.
2) Determination of pre-stress and shim adjustment were made faster and improved by the FPS and
FSA, respectively.
3) An extensive estimate of the erection state of the bridge can be derived from the FSI; therefore, more
accurate erection management is possible.
4) All of these methods reduce to linear programming problems. Therefore, existing subroutine
packages are available and can be used readily for system integration.
5) Since the designer's intention can be formulated in the form of design goals and design tuning, it is
effectively included in the cable adjustment of complicated structures such as steel-concrete cable-stayed
bridges and partially anchored cable-stayed bridges.
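The reduction to linear programming in point 4 can be illustrated in miniature. The sketch below is not the paper's FSA formulation (which optimizes all cable shims simultaneously with an LP solver); it is a hypothetical one-shim toy in which each design goal j requires a linear response s_j·x + z_j to stay within (1 − h)·d_j of its center c_j, and the common satisfaction level h is maximized by bisection instead of linear programming.

```python
# Hypothetical one-variable sketch of the fuzzy satisfaction-level idea:
# each goal (s, z, c, d) asks that s*x + z lie within (1 - h)*d of c.

def feasible_interval(goals, h):
    """Intersection of the x-intervals allowed at satisfaction level h."""
    lo, hi = float("-inf"), float("inf")
    for s, z, c, d in goals:
        half = (1.0 - h) * d
        a, b = (c - half - z) / s, (c + half - z) / s
        if a > b:
            a, b = b, a
        lo, hi = max(lo, a), min(hi, b)
    return (lo, hi) if lo <= hi else None

def max_satisfaction(goals, iters=60):
    """Largest h in [0, 1] leaving a nonempty feasible interval for x."""
    if feasible_interval(goals, 0.0) is None:
        return None
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if feasible_interval(goals, mid) is not None:
            lo = mid
        else:
            hi = mid
    return lo, feasible_interval(goals, lo)
```

For two goals pulling x toward 10 and 14, each with half-width 5, the method settles at h = 0.6 with x = 12, the compromise midway between the two targets.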

The practical application of these methods has been planned, and they will be further improved through
experience.

REFERENCES
[1] Tanaka, H., Kamei, M. and Kaneyoshi, M.: Cable Tension Adjustment by Structural System Identifica-
tion, CABRIDGE, Bangkok, Nov., 1987
[2] Fujisawa, N. and Tomo, H.: Computer-aided Cable Adjustment of Stayed Bridges, IABSE Proceedings
P-92/85, pp. 185-190, 1985
[3] Yamada, Y., Furukawa, K., Egusa, T. and Inoue, K.: Studies on Optimization of Cable Pre-stresses for
Cable-stayed Bridges, Proceedings, JSCE, Vol. 356/1-3, pp. 415-423, 1985
[4] Terano, T., Asano, K. and Sugeno, M.: Introduction to fuzzy system, Ohmu-sha, pp. 67-81, 1987 (in
Japanese).
[5] Kamei, M., Furuta, H., Kaneyoshi M. and Tanaka, H.: Cable Tension Adjustment by Fuzzy Structural
System Identification, Proc. 44th Annual Meeting of JSCE, Part 1-160, Oct., 1989 (in Japanese)

APPENDIX
Notation
A_i = fuzzy coefficient of linear equation
C_i = scatter of membership function of A_i, S_i and X_i (Fig. 3)
e_j = measurement (i.e., fuzzy output) error
F_d = structural member force vector due to dead load
F_i = error mode vector
F_ji = j-component of error mode vector F_i
F_0 = central vector of member force
F_0 = design goal of structural member force after introducing pre-stress
F_0j = central value of design goal of member force (j)
ΔF = one half of difference vector between upper and lower bounds of design goal
ΔF_j = one half of difference between upper and lower bounds of design goal (j)
h = fitness (threshold) parameter in dealing with fuzzy data (0 ≤ h < 1)
K_i = member force influence vector due to unit pre-stress of cable
K_ji = influence coefficient of member force (j) due to unit pre-stress of cable (i)
M1 = number of member forces
M2 = number of field measurement items (i.e., member forces and/or displacements)
M3 = number of tuning items (i.e., member forces and/or displacements)
N1 = number of cable members
N2 = number of error factors
N3 = number of cable members
S_ji = influence coefficient of force or camber (j) due to unit shim adjustment of cable (i)
X_i = a fuzzy membership function with C_i and a_i (Fig. 3)
y = designers' intended or judged tuning goal vector for member forces and displacements by
shim adjustment
Y_j = central value of the tuning of j-component
ΔY_j = one half of the difference between upper and lower bounds of the tuning of j-component
Z = error vector, the components of which consist of camber errors and member force errors
observed by field measurement
Z_j = j-component of member force error or camber error
a_i = central value of membership function
BAYESIAN ANALYSIS OF MODEL UNCERTAINTY
IN STRUCTURAL RELIABILITY

Armen Der Kiureghian


University of California at Berkeley
Berkeley, CA 94720, U.S.A.

Abstract
A Bayesian approach for assessing model uncertainty and including its effect in structural reli-
ability analysis is presented. Model uncertainties due to formulation inexactness, measurement
error and insufficient data are included. Simple formulas are derived that directly show the effect
of model uncertainty on the reliability index.

Introduction
An important source of uncertainty in structural reliability is the uncertainty inherent in the
mathematical models employed to describe the behavior or the limiting state of a structure. An
expression describing the capacity of a member or the failure mechanism of a structural system is
such a model. In most engineering applications, mathematical models are idealizations and
describe reality only to within an unknown approximation. For a meaningful evaluation of structural
safety, therefore, it is essential that the uncertainty associated with each employed model be
assessed and incorporated in the reliability analysis. In this paper, a formulation based on the
Bayesian statistical approach is proposed for this purpose.
In the context of this paper, a "model" is a set of one or more mathematical expressions relating
a set of measurable or observable quantities x = (x_1, x_2, ..., x_n), denoted herein as variables.
To formulate these expressions, one normally introduces a set of constants θ = (θ_1, θ_2, ...),
denoted model parameters, which stand for the inherent properties of nature or of the structure
under consideration. These parameters may or may not have physical meaning and usually are not
observable. Some might be known constants (e.g., the gravity acceleration), while others are unknown
and must be estimated in the process of developing the model. Our concern here is with the
unknown parameters.
Before discussing general forms of the model, we consider two examples. The first is a well-known
model describing the flexural capacity, M_u, of a singly-reinforced, under-balanced concrete
beam of rectangular cross section (Park and Paulay 1975)

M_u = φ A_s f_y (d − η A_s f_y / (f'_c b))    (1)

In this model, the cross-sectional area of the reinforcing bar, A_s, the yield stress of the bar, f_y, the
nominal compressive strength of concrete, f'_c, and the beam dimensions, b and d, are the model
variables, whereas φ and η are the model parameters. The second example concerns the liquefaction
failure of saturated sandy soils during cyclic ground motions generated by earthquakes.
Motivated by the work of Seed et al. (1985) and Liao et al. (1988), the limit-state surface, describing
the boundary between liquefaction and no-liquefaction events, may be modeled by

g(r, N, θ) = exp[−(θ_1 + θ_2 ln r + θ_3 N)] − 1 = 0    (2)

with g ≤ 0 denoting the liquefaction event and g > 0 denoting the no-liquefaction event, in which
the normalized cyclic stress ratio, r, and the standard penetration resistance, N, are the model variables,
and θ_1, θ_2 and θ_3 are the model parameters. The above two examples, both representing
single-equation models, correspond to two fundamentally different situations in model estimation,
and this difference will be elaborated on later in this paper.
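Evaluating a limit-state function such as Eq. 2 for given parameter values is straightforward; the sketch below does so for the liquefaction model. The θ values are hypothetical, not fitted values: they are chosen only so that a higher cyclic stress ratio r promotes liquefaction (g ≤ 0) while a higher penetration resistance N opposes it (which requires θ_3 < 0 in this parameterization).

```python
import math

# Liquefaction limit-state function of Eq. 2, evaluated for hypothetical
# parameter values theta = (theta1, theta2, theta3).

def g_liq(r, N, theta):
    t1, t2, t3 = theta
    return math.exp(-(t1 + t2 * math.log(r) + t3 * N)) - 1.0

theta = (0.0, 1.0, -0.2)            # hypothetical values, not fitted
g_fail = g_liq(2.0, 1.0, theta)     # high stress, loose soil: g < 0 (liquefaction)
g_safe = g_liq(0.5, 10.0, theta)    # low stress, dense soil: g > 0 (no liquefaction)
```

With these assumed parameters, the first point falls on the liquefaction side of the surface and the second on the safe side, matching the sign convention stated above.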
In the most general case, an m-equation model is described by a set of m equations of the
form

g_i(x, θ) = 0,   i = 1, 2, ..., m    (3)

where x is the vector of model variables and θ is the vector of model parameters. The functions
g_i(·) may have algebraic forms, such as in Eqs. 1 and 2. More generally, however, they may have
differential or integral forms. The above form is known as a structural model (Bard 1974). The
second example above is of this form. Often the primary purpose of a model is to predict the
values of a subset of (dependent) variables, y, for future observations of the remaining (independent)
variables, x. To show this, we rewrite the model in Eq. 3 in the form

g_i(x, y, θ) = 0,   i = 1, 2, ..., m    (4)

Obviously, the dimension of y should not be greater than the number of equations, m. If Eqs. 4
can be solved for y (when the dimension of y is equal to m), the model can be written in the more
convenient form

y_i = g_i(x, θ),   i = 1, 2, ..., m    (5)

This is known as the reduced model (Bard 1974). The first example above is of this form.
The functions g_i(·) are usually prescribed by the engineer or scientist in accordance with laws
governing the physical processes relevant to the problem under consideration. Equation 1, for example,
satisfies the laws of equilibrium for the forces acting perpendicular to the cross section of the beam.
In some instances, however, the form of the model is selected without physical reasoning and only
on the basis of intuition or prior experience. This is the case for the example described by Eq. 2.
This model was selected by Liao et al. (1988) to simulate a curve judgmentally drawn by Seed et
al. (1985) through observed liquefaction data. In either case, the development of the model normally
requires idealization, either for reasons of convenience or because the underlying processes
are not well understood. For example, the idealization in the model in Eq. 1 is partly for convenience,
since more refined models of the flexural capacity employing the complete stress-strain
diagrams of concrete and reinforcing bars are available (e.g., see Park and Paulay 1975). On the
other hand, the idealization in the model in Eq. 2 for liquefaction is largely due to our lack of
understanding of the liquefaction phenomenon.

The main issue in model development is the estimation of the parameters θ based on a measured
set of values x_k (and y_k for the reduced model), k = 1, 2, ..., n, of the variables, which may be
obtained from experiments or observations in nature. We call this model identification. This is
an old problem and an extensive body of literature on this subject is available (e.g., see Bard
1974). In particular, the widely used methods of least squares and maximum likelihood are aimed
at estimating best-fit values of θ that minimize an error measure and maximize a likelihood
measure, respectively. Methods developed in the field of system identification (e.g., Beck 1989), where
the parameters defining the model of a system are estimated based on the observed response to
known excitations of the system, are also directly relevant. In the context of structural reliability,
however, the least-square and maximum-likelihood methods are not adequate since they do not
provide measures of the model uncertainty. The Bayesian approach presented in this paper, on the
other hand, provides a convenient and logical means for assessing model uncertainty and incor-
porating its effect in reliability analysis. While this approach has been used in other fields for some
time (see Bard 1974), to the author's knowledge, this paper is the first to use it in the context of
structural reliability analysis.
The topic of model uncertainty has received limited attention in the field of structural reliabil-
ity. The only significant work is that of Ditlevsen (1982, 1988), where a framework in the context
of second-moment reliability analysis was developed. His method essentially amounts to modifying
the second-moment properties of the basic random variables through a transformation, the elements
of which are judgmentally determined. The approach is useful for probabilistic code development
and, in fact, is formulated to justify the recommended practice in several existing codes. However,
it does not make use of statistical data to assess the model uncertainty and as such lacks objectivity.
Furthermore, it is not clear how the method can be used in more advanced reliability methods
employing distributional information.

Sources of Model Uncertainty


Uncertainty in models such as in Eqs. 1-5 may arise from a variety of sources. The first is the
inexactness of the employed mathematical formulation, which itself may arise from two different
sources. One is that the formulation might be missing certain variables that have influence on the
interplay between the considered variables. This could be due to our lack of knowledge of these
missing variables, or due to our desire to exclude them from the formulation for the sake of simpli-
city. The corresponding errors are called errors of ignorance and errors of simplification, respec-
tively. For example, in the liquefaction phenomenon, the grain size distribution most probably has
an influence on the liquefaction potential, but it is not included in Eq. 2. The second possible
source of model inexactness is that the assumed functional form of the model may not be the
correct one. For example, we often employ linear models to describe phenomena that are
inherently nonlinear. Again, this could be either due to ignorance or for simplicity.
In many applications, the measured values of the variables in the data set do not represent
their true values because of errors inherent in the measuring procedure. In this paper, we use the
notation x_k (and y_k) to denote the (unknown) true values and x̂_k (and ŷ_k) to denote the measured
values. Obviously, a model identified on the basis of such measured data will contain an unknown

error, even if it were based on an exact formulation. This is denoted herein as model uncertainty
due to measurement error.
The third source of model uncertainty lies in the statistical estimation of the parameters from
limited data when uncertainties due to model inexactness or measurement error are present. If
these latter sources of uncertainty are not present, then a sample of size equal to the dimension of θ
is necessary to find the solutions for θ, provided they exist and are unique. In the presence of
model inexactness or measurement error, however, no amount of data can provide exact solutions
for θ, and the true values of these parameters remain unknown. Furthermore, it is often the case in
engineering that the sample size of observations is small, and this leads to further uncertainty in the
estimation of the parameters.
Consistent with the Bayesian notion of probability (Lindley 1985), we express our uncertainty
in the parameters θ in terms of a probability density function (PDF), f(θ). All three broad
sources of model uncertainty (i.e., formulation inexactness, measurement error and limited sample
size) contribute to this probability distribution. Thus, this distribution captures the essence of
model uncertainty. In the following section, we discuss how this distribution is determined by the
well-known Bayesian updating rule.

Bayesian Parameter Estimation


Let f′(θ) denote a prior distribution on θ, reflecting our information on the model parameters
prior to making experiments or observations. This might be based on engineering judgment, physical
requirements (e.g., the requirement that a parameter be bounded), or a prior identification of
the model. The Bayesian updating formula is given by (Lindley 1985)

f(θ) = c L(θ) f′(θ)    (6)

in which L(θ) is the likelihood function, c = [∫ L(θ) f′(θ) dθ]⁻¹ is a normalizing factor, and f(θ)
is the updated posterior distribution of θ. For a given set of observations, the likelihood function is
proportional to the conditional probability of making the observations, given the value θ of the
parameters. It represents the objective information contained in the observed data. The posterior
distribution incorporates both the prior information (which might be entirely subjective) as well as
the objective information through the likelihood function.
Guidelines for the selection of prior distributions, including the case where no prior information
is available, are discussed by Jeffreys (1961) and Box and Tiao (1973), among others. In the
following, we present formulations of the likelihood function for different types of models, uncertainty
situations and observations. For simplicity in this presentation, we only consider single-equation
models. The extension to multi-equation models is straightforward.

Likelihood Functions
(a) Exact Reduced Model
Suppose the model
y = g(x, θ)    (7)

is exact, but the measured values ŷ_k, k = 1, ..., n, of the dependent variable y are in error. (The
independent variables, x, are assumed to be measured accurately. For the more general case see
(c) below.) Let e_k = ŷ_k − y_k denote the error in the k-th experiment or observation. Substituting in
Eq. 7 and rearranging terms, we obtain

ŷ_k = g(x_k, θ) + e_k,   k = 1, 2, ..., n    (8)

Let f_e(e | η) denote the joint PDF of e = (e_1, ..., e_n), where η denotes the set of distribution
parameters. The required likelihood function then is

L(θ, η) ∝ f_e(ŷ_1 − g(x_1, θ), ..., ŷ_n − g(x_n, θ) | η)    (9)

Here we have used the sign ∝ to denote proportionality, since we are not interested in the constant
coefficient of the likelihood function, which can be taken care of through the normalizing factor c.
The distribution f_e(e | η) depends on the procedure used for measuring y, and its parameters, η,
can be determined by proper calibration of the measuring devices. Most commonly, errors at successive
measurements are assumed to be statistically independent and normally distributed with zero
mean (after removal of the systematic component of the error) and a common standard deviation,
σ. In that case, Eq. 9 takes the form

L(θ, σ) ∝ (1/σⁿ) exp{ −(1/2) Σ_{k=1}^{n} [(ŷ_k − g(x_k, θ))/σ]² }    (10)
If the distribution parameters η (e.g., σ in Eq. 10) are unknown, they must be considered as additional
uncertain parameters to be estimated through the updating rule in Eq. 6. For notational convenience,
one might include these parameters as a subset of the vector θ.
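As a concrete illustration of the updating rule in Eq. 6 with the normal likelihood of Eq. 10 (known σ), the sketch below computes the posterior of a single parameter on a discrete grid. The linear model y = θx, the grid, the uniform prior and the three data pairs are all invented for the example.

```python
import math

# Discrete-grid version of Eq. 6: f(theta) ∝ L(theta) * f'(theta), with the
# independent normal likelihood of Eq. 10 evaluated in log form.

def posterior_on_grid(thetas, prior, data, sigma, model):
    """Posterior probabilities over a grid of theta values."""
    post = []
    for th, pr in zip(thetas, prior):
        loglik = sum(-0.5 * ((y - model(x, th)) / sigma) ** 2 for x, y in data)
        post.append(pr * math.exp(loglik))
    c = sum(post)                       # discrete analogue of the factor c
    return [p / c for p in post]

thetas = [1.0, 1.5, 2.0, 2.5, 3.0]
prior = [0.2] * 5                       # uniform prior
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # invented observations
post = posterior_on_grid(thetas, prior, data, 0.5, lambda x, th: th * x)
```

With this made-up data the posterior mass concentrates at θ = 2, the grid value with the smallest residuals; a finer grid or a conjugate-normal analysis would give the continuous posterior.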
(b) Inexact Reduced Model
Suppose the model in Eq. 7 is inexact, such that for the k-th observation a random correction
term γ_k has to be added to maintain the equality, i.e.,

y_k = g(x_k, θ) + γ_k,   k = 1, 2, ..., n    (11)

The term γ_k may represent the influence of the "missing" variables and/or the inexact form of
the model equation. Let f_γ(γ | η′) denote the joint PDF of γ = (γ_1, ..., γ_n), where η′ are the
distribution parameters. With no measurement error, the likelihood function obviously has the
same form as in Eq. 9, with ŷ_k and η replaced by y_k and η′, respectively, and the subscript e
replaced by γ. The distribution parameters, η′, however, are more difficult to determine in this
case, since this would require calibration against an exact model, which may not exist. In many cases,
it is appropriate to assume the γ_k to be statistically independent normals with zero means (to generate an
unbiased model) and a common standard deviation, σ. The form of the likelihood function in Eq.
10 then applies. On the other hand, if the γ_k are assumed to be correlated normals with zero mean
and covariance matrix Σ, the likelihood function becomes

L(θ, Σ) ∝ (det Σ)^{−1/2} exp( −(1/2) γᵀ Σ⁻¹ γ )    (12)

where γᵀ = (y_1 − g(x_1, θ), ..., y_n − g(x_n, θ)) must be substituted. Again, if Σ is unknown, its
elements may be considered as a subset of θ to be estimated by the updating rule in Eq. 6.
Now suppose there is also error in measuring y. Eqs. 8 and 11 can be combined to read

ŷ_k = g(x_k, θ) + γ_k + e_k,   k = 1, 2, ..., n    (13)

The form of the likelihood function obviously is the same as before if we use the joint distribution
of γ + e. Often it is not possible to distinguish the two error vectors, and one is forced to assign a
distribution to the sum. For example, if the sum γ + e is assumed to be normal with zero mean
and covariance matrix Σ, the likelihood function will take the form in Eq. 12 with γ replaced by
γ + e, with elements γ_k + e_k = ŷ_k − g(x_k, θ).

(c) Exact Structural Model


Consider the exact structural model
g(x, θ) = 0    (14)
and assume the available data consist of measured values x̂_k of x for which the equality holds.
Such data are denoted herein as limit-surface data. (For the liquefaction model in Eq. 2, such data
can be generated, at least theoretically, by measuring N for each specimen and then applying a gradually
increasing cyclic stress ratio r until liquefaction occurs. Obviously, for certain applications,
including the liquefaction case, such an experiment may not be easy to conduct.) Suppose the measured
values x̂_k, k = 1, ..., n, are in error, and let e_k = x̂_k − x_k denote the error vector in the k-th
experiment. Defining E = (e_1, ..., e_n) with the PDF f_E(E | η), the likelihood function takes the
form

L(θ, η, x_1, ..., x_n) ∝ f_E(x̂_1 − x_1, ..., x̂_n − x_n | η)    (15)

which includes the unknown true values x_k. These must satisfy the set of n equations

g(x_k, θ) = 0,   k = 1, ..., n    (16)

Hence, the likelihood function has an implicit dependence on θ. Note that one could at most eliminate
n of the unknown true values in the likelihood function by solving Eqs. 16. The remaining
variables will have to be treated as uncertain parameters to be estimated by the updating rule in Eq.
6. After the updating is done, one may integrate the posterior distribution over these parameters to
obtain the joint distribution of θ and η.
In some applications the available data consist of measured values x̂_k, k = 1, ..., n, for
which the signs of g(x_k, θ) are observed. We denote such data as failure/no-failure data. (For the
liquefaction model, such data can be generated by observing liquefaction or no-liquefaction events
for different sets of measured values of N and r. Obviously, such observations are much easier to
conduct for the liquefaction phenomenon than the observations on the limit surface.) Assuming the
model is exact but the measured values are in error, the likelihood function takes the same form as
in Eq. 15, but the equality constraints in Eq. 16 must be replaced by the inequality constraints

g(x_k, θ) ≤ 0, if the event g ≤ 0 (failure) is observed
g(x_k, θ) > 0, if the event g > 0 (no failure) is observed    (17)

In this case none of the unknown parameters x_k can be eliminated from the likelihood function;
however, the inequalities impose bounds on the acceptable range of the parameters.
(d) Inexact Structural Model
Suppose the structural model in Eq. 14 is inexact, such that in the k-th experiment a random
error term γ_k must be added to maintain the equality, i.e.,

g(x_k, θ) = γ_k,   k = 1, ..., n    (18)

Let f_γ(γ | η′) denote the joint PDF of γ = (γ_1, ..., γ_n) and assume there are no measurement
errors. If x_k, k = 1, ..., n, are limit-surface data, the likelihood function takes the form

L(θ, η′) ∝ f_γ(g(x_1, θ), ..., g(x_n, θ) | η′)    (19)

However, if the x_k are failure/no-failure data, the likelihood function takes the form

L(θ, η′) ∝ P[ ∩_{k∈F} {γ_k ≥ g(x_k, θ)} ∩ ∩_{k∈F̄} {γ_k < g(x_k, θ)} | η′ ]    (20)

in which P[·] denotes probability, and F and F̄ respectively denote the sets of experiments in which
the failure and no-failure events are observed. When the γ_k are statistically independent, the preceding
can be written in the form

L(θ, η′) ∝ ∏_{k∈F} F̄_{γ_k}[g(x_k, θ)] ∏_{k∈F̄} F_{γ_k}[g(x_k, θ)]    (21)

in which F_{γ_k}[·] and F̄_{γ_k}[·] respectively denote the cumulative distribution function of γ_k and its
complement.
If measurement errors are present, the likelihood function with the limit-surface data becomes

L(θ, η, η′, x_1, ..., x_n) ∝ f_E(x̂_1 − x_1, ..., x̂_n − x_n | η) f_γ(g(x_1, θ), ..., g(x_n, θ) | η′)    (22)

in which E is the collection of error vectors defined earlier. In this case no restriction applies to the
true values of the variables, and the entire set must be treated as uncertain parameters. If the available
data are of the failure/no-failure kind, the likelihood function becomes

L(θ, η, η′, x_1, ..., x_n) ∝ f_E(x̂_1 − x_1, ..., x̂_n − x_n | η)
    × P[ ∩_{k∈F} {γ_k ≥ g(x_k, θ)} ∩ ∩_{k∈F̄} {γ_k < g(x_k, θ)} | η′ ]    (23)

Again, if the γ_k are independent, the expression in Eq. 21 in terms of the cumulative distribution
functions can be used.
It is seen that the structural model in general may have many more uncertain parameters to be
estimated. For this reason, whenever possible, it is desirable to use the reduced model in Eq. 7,
provided the independent variables can be measured accurately. If this is not possible, simplifying
assumptions may have to be made. These might include the assumption of independent and identical
distributions for the elements of e_k and γ_k for the different experiments. Furthermore, if the
measurement error for a subset of the variables is judged to be small, the corresponding measured
values can be considered to represent the true values, thus reducing the number of unknown parameters.
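The failure/no-failure likelihood of Eq. 21 with independent zero-mean normal γ_k (common σ) is easy to evaluate numerically. The sketch below does so for an invented one-variable limit-state g(x, θ) = θ − x and invented observations: failure is recorded when the true limit-state is nonpositive, i.e. when γ_k ≥ g(x_k, θ).

```python
import math

# Eq. 21 with independent gamma_k ~ N(0, sigma^2):
# k in F contributes P[gamma_k >= g] = 1 - Phi(g/sigma),
# k in F-bar contributes P[gamma_k < g] = Phi(g/sigma).

def Phi(t):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def loglik(theta, fail_x, safe_x, g, sigma):
    ll = 0.0
    for x in fail_x:                 # observed failures (k in F)
        ll += math.log(1.0 - Phi(g(x, theta) / sigma))
    for x in safe_x:                 # observed no-failures (k in F-bar)
        ll += math.log(Phi(g(x, theta) / sigma))
    return ll

g = lambda x, theta: theta - x       # invented single-variable limit-state
safe_x, fail_x = [0.0, 1.0, 2.0], [4.0, 5.0, 6.0]
best = max([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
           key=lambda th: loglik(th, fail_x, safe_x, g, 1.0))
```

For this symmetric data the likelihood peaks at θ = 3, the boundary separating the observed safe and failed points; in the Bayesian setting this likelihood would be multiplied by a prior and normalized, as in Eq. 6.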

Reliability Analysis Including Model Uncertainty


The structural reliability problem usually is formulated in terms of a vector of random variables
X, representing inherent randomness in the structure and its load environment, and a limit-state
function g(x) defined in the outcome space x of X in such a way that g(x) > 0, g(x) ≤ 0, and
g(x) = 0 respectively denote the safe set, the failure set, and the limit-state surface.
In general, the function g (x) can be viewed as a mathematical model idealizing the limit state
of the structure. The function could be a single model by itself (e.g., the liquefaction model in Eq.
2), or it could be composed of several submodels. For example, the limit-state function
g = c(x) -d(x) consists of a "capacity" submodel c = c(x) (e.g., the flexural capacity model in Eq.
1) and a "demand" submodel d = d(x). The important point is that each of these models may contain
uncertainties, which in turn introduce uncertainty in the limit-state function. Based on our
earlier discussion, these uncertainties can be represented in terms of a vector of uncertain parameters
θ, and hence we write the limit-state function as g(x, θ).
In most applications, parameters defining the distribution of X are also uncertain. The source
of this uncertainty usually is statistical, i.e., it arises from the limited size of data samples,
although measurement errors may also contribute. A detailed discussion of this subject is
beyond the scope of this paper. It suffices to say that these uncertainties can also be represented in
terms of a vector of uncertain parameters (Der Kiureghian 1989). For mathematical convenience,
we allow the vector θ to also include these uncertain distribution parameters and write the PDF of
X as f_X(x | θ). From here on, we will refer to model and distribution parameter uncertainties collectively
as parameter uncertainties.
It is important to note that the natures of the uncertainties represented by the random variables
X and the parameters θ are fundamentally different. Specifically, the uncertainties in X, which
represent inherent randomness, cannot be influenced without changing the physical characteristics
of the problem (e.g., changing the concrete mix to reduce the variability of f'_c), whereas the uncertainties
in θ can be influenced by the use of alternative models and the collection of additional data.
This fundamental difference may require a separate analysis of these uncertainties and an evaluation
of their respective influences on structural reliability.
For a given value θ of the parameters, the conditional probability of failure is

P_f(θ) = ∫_{g(x,θ) ≤ 0} f_X(x | θ) dx    (24)

The corresponding generalized reliability index is defined by

β(θ) = −Φ⁻¹[P_f(θ)]    (25)

in which Φ(·) denotes the standard normal cumulative probability. For uncertain θ, clearly P_f(θ)
and β(θ) are also uncertain. Methods for computing probability distributions that reflect this uncertainty
are described in a recent paper (Der Kiureghian 1989). In the same place, simple expressions
for the mean and variance of the conditional reliability index are presented, based on mean-centered,
first-order approximations, which are as follows:

μ_β ≈ β(M_θ),    σ_β² ≈ ∇_θβ Σ_θθ ∇_θβᵀ    (26)
In these expressions, M_θ and Σ_θθ are the mean vector and covariance matrix of θ, respectively, and
∇_θβ denotes the gradient row-vector of β(θ) with respect to the elements of θ, evaluated at the mean
point. Clearly, the standard deviation σ_β is a measure of the influence of parameter uncertainties
on the reliability index. It is worth noting that the gradient vector ∇_θβ is easy to compute in the
first-order reliability method (FORM) as well as in certain simulation methods.
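Eq. 26 can be checked numerically. The sketch below takes the gradient by central finite differences at the mean point; the β(θ) function is an invented linear stand-in for a FORM result, and the mean and covariance values are likewise hypothetical.

```python
# First-order approximations of Eq. 26: mu_beta ≈ beta(M_theta) and
# sigma_beta^2 ≈ (grad beta) Sigma (grad beta)^T, gradient by central
# differences at the mean point.

def first_order_beta_stats(beta, mean, cov, eps=1e-6):
    mu = beta(mean)
    grad = []
    for i in range(len(mean)):
        up = list(mean); up[i] += eps
        dn = list(mean); dn[i] -= eps
        grad.append((beta(up) - beta(dn)) / (2.0 * eps))
    var = sum(grad[i] * cov[i][j] * grad[j]
              for i in range(len(grad)) for j in range(len(grad)))
    return mu, var ** 0.5

beta = lambda th: 3.0 - 0.5 * th[0] + 0.2 * th[1]     # hypothetical beta(theta)
mu_b, sig_b = first_order_beta_stats(beta, [1.0, 2.0],
                                     [[0.04, 0.0], [0.0, 0.25]])
```

For this linear β(θ) the first-order formulas are exact: μ_β = 2.9 and σ_β² = 0.25·0.04 + 0.04·0.25 = 0.02. For a nonlinear β(θ) from an actual FORM analysis they are approximations.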
In decision applications, one is usually interested in the expected value of the conditional probability
of failure,

P̃_f = E[P_f(θ)] = ∫_θ P_f(θ) f(θ) dθ = ∫_θ ∫_{g(x,θ) ≤ 0} f_X(x | θ) f(θ) dx dθ    (27)

The corresponding reliability index is

β̃ = −Φ⁻¹[P̃_f]    (28)

These have been defined as predictive measures of safety (Der Kiureghian 1989). The integral in
Eq. 27 can be computed by any reliability analysis technique, e.g., the first- or second-order reliability
methods (FORM and SORM) or simulation methods, by simply considering the uncertain parameters
θ as a subset of the basic random variables X.
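The remark that θ can simply be pooled with X is easy to verify by simulation. The case below is invented: g = θ − x with x ~ N(0,1) and θ ~ N(3, 0.5²), for which Eq. 27 has the closed form P̃_f = Φ(−3/√1.25), available for comparison.

```python
import math
import random

# Direct Monte Carlo evaluation of Eq. 27, sampling theta and x together.
# Invented case: g = theta - x, x ~ N(0,1), theta ~ N(3, 0.5^2).

random.seed(1)
n, fails = 200_000, 0
for _ in range(n):
    theta = random.gauss(3.0, 0.5)
    x = random.gauss(0.0, 1.0)
    if theta - x <= 0.0:             # failure event g <= 0
        fails += 1
pf_mc = fails / n

# Exact answer: x - theta ~ N(-3, 1.25), so P_f = Phi(-3/sqrt(1.25)).
pf_exact = 0.5 * (1.0 + math.erf((-3.0 / math.sqrt(1.25)) / math.sqrt(2.0)))
```

Because θ enters linearly here, the predictive failure probability is available in closed form; in realistic problems the same pooling is done inside FORM, SORM or importance sampling rather than by crude Monte Carlo.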
Another approach for computing the above measures is to use a "nested" reliability analysis.
Consider a reliability problem defined by the limit-state function

g = u + β(θ)    (29)

in which u is a standard normal variate (zero mean and unit standard deviation), which is independent
of θ. As shown by Wen (1987) in a slightly different formulation, the solution of this problem
is identical to P̃_f, since one can write

∫_{u + β(θ) ≤ 0} f_u(u) f(θ) du dθ = ∫_θ [ ∫_{−∞}^{Φ⁻¹[P_f(θ)]} f_u(u) du ] f(θ) dθ = ∫_θ P_f(θ) f(θ) dθ    (30)

which is identical to the first expression on the right-hand side of Eq. 27. Following this approach,
for each value of θ one solves the "inside" conditional reliability problem defined by Eq. 24, and
then solves the "outside" reliability problem defined by the limit-state function in Eq. 29. Note that
the "inside" reliability problem has to be solved for each value of θ. Any of the conventional
reliability analysis methods, e.g., FORM, SORM or simulation, may be used for each of these problems.
As one can see, the nested reliability approach solves two smaller reliability problems (one
involving the random variables X alone and another involving θ and u) in a nested manner, as
opposed to a direct approach involving all the random variables X and θ at the same time. The
nested approach is useful if one can justify a simpler approach for either the "inside" or the "outside"
problem than one can justify for the direct approach.
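A minimal sketch of the nesting, on an invented case: with x ~ N(0,1) and g = θ − x, the "inside" problem of Eq. 24 has the closed-form solution β(θ) = θ, so only the "outside" problem of Eq. 29 needs to be simulated. Assuming θ ~ N(3, 0.5²), the exact answer is Φ(−3/√1.25).

```python
import math
import random

# Nested approach of Eqs. 29-30. Inside problem solved in closed form
# (beta(theta) = theta for g = theta - x, x ~ N(0,1)); outside problem
# with limit state u + beta(theta) <= 0 solved by simulation.

random.seed(2)

def beta_inner(theta):
    """Conditional reliability index of the 'inside' problem (closed form)."""
    return theta

n, fails = 200_000, 0
for _ in range(n):
    theta = random.gauss(3.0, 0.5)   # assumed parameter distribution
    u = random.gauss(0.0, 1.0)       # auxiliary standard normal of Eq. 29
    if u + beta_inner(theta) <= 0.0:
        fails += 1
pf_nested = fails / n
pf_exact = 0.5 * (1.0 + math.erf((-3.0 / math.sqrt(1.25)) / math.sqrt(2.0)))
```

In a realistic application beta_inner would itself be a FORM or SORM run for each sampled θ; here the closed form makes the equivalence of Eq. 30 directly checkable.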
The nested reliability approach offers a simple approximation of the predictive reliability index.
Noting that the reliability index is approximately equal to the ratio of the mean to the standard
deviation of the limit-state function, using Eq. 29 one has

β̃ ≈ μ_g / σ_g = μ_β / (1 + σ_β²)^{1/2}    (31)

Together with the approximate relations in Eqs. 26, the preceding equation gives a simple formula
for determining the amount by which the "mean" reliability index (computed based on the mean
values of the parameters) decreases on account of the parameter uncertainties. In general, the
approximation will be good, provided the uncertainty in θ does not dominate the reliability problem.
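The quality of Eq. 31 can be checked on an invented linear-normal case: with β(θ) = θ and θ ~ N(3, 0.5²), the limit-state g = u + β(θ) of Eq. 29 is exactly normal, so Eq. 31 holds with no error and gives β̃ = 3/√1.25; for nonlinear β(θ) it is only an approximation.

```python
import math

# Eq. 31: predictive reliability index from the mean and standard deviation
# of the conditional index. Exact for the linear-normal case sketched above.

def predictive_beta(mu_beta, sigma_beta):
    return mu_beta / math.sqrt(1.0 + sigma_beta ** 2)

bt = predictive_beta(3.0, 0.5)                       # hypothetical mu, sigma
pf = 0.5 * (1.0 + math.erf(-bt / math.sqrt(2.0)))    # Phi(-beta_tilde)
```

With μ_β = 3 and σ_β = 0.5 the parameter uncertainty reduces the index from 3.0 to about 2.68, illustrating the "decrease in the reliability index" the paper refers to.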

Summary and Conclusion


A Bayesian approach for the assessment of model uncertainty and its inclusion in structural
reliability analysis is developed. Model uncertainty is described in terms of a vector of uncertain
model parameters, the distribution of which is obtained by use of the Bayesian updating rule.
Model uncertainties arising from formulation inexactness, measurement error and insufficient data
are included in the analysis.
A Bayesian framework for reliability analysis including the effect of (model and distribution)
parameter uncertainties is presented. A simple formula for the variance of the reliability index,
reflecting the influence of parameter uncertainties, is derived. Another simple formula expresses
the decrease in the reliability index due to parameter uncertainties. These quantities are easy to
compute with existing reliability analysis methods, such as FORM, SORM and simulation.

Acknowledgment
A part of this work was carried out while the writer was a Visiting Professor of Systems
Analysis of the Mitsubishi Heavy Industry Chair at the Research Center for Advanced Science and
Technology of Tokyo University, Japan. The support provided during this visit is gratefully ack-
nowledged.

References
Bard, Y. (1974). Nonlinear parameter estimation. Academic Press, Inc., Orlando, Florida.
Beck, J.L. (1989). "Statistical system identification of structures." Proc. 5th Int. Conf. Struc. Safety
and Reliab., San Francisco, CA, 2, 1395-1402.
Box, G.E.P. and G.C. Tiao (1973). Bayesian inference in statistical analysis. Addison-Wesley
Pub. Co., Inc., Reading, Mass.
Der Kiureghian, A. (1989). "Measures of structural safety under imperfect states of knowledge." J.
Struct. Eng., ASCE (in press).
Ditlevsen, O. (1982). "Model uncertainty in structural reliability." Struc. Safety, 1(1), 73-86.
Ditlevsen, O. (1988). "Uncertainty and structural reliability. Hocus pocus or objective modeling."
Report No. 226, Department of Civil Engineering, Technical University of Denmark, Lyngby,
1988.
Jeffreys, H. (1961). Theory of probability, 3rd ed. Oxford University Press, London.
Liao, S.S.C., D. Veneziano and R.V. Whitman (1988). "Regression models for evaluating liquefaction
probability." J. Geotech. Eng., ASCE 115(5), 1119-1140.
Lindley, D.V. (1985). Making Decisions, 2nd ed. John Wiley & Sons, London, U.K.
Park, R. and T. Paulay (1975). Reinforced concrete structures. John Wiley & Sons, New York,
NY.
Seed, H.B., et al. (1985). "Influence of SPT procedures in soil liquefaction resistance evaluations."
J. Geotech. Eng., ASCE 111(12), 1425-1445.
Wen, Y.K. and H.C. Chen (1987). "On fast integration for time variant structural reliability."
Probab. Eng. Mech., 2(3), 156-162.
SIZE EFFECT OF RANDOM FIELD ELEMENTS
ON FINITE-ELEMENT RELIABILITY METHODS

Pei-Ling Liu
Institute of Applied Mechanics
National Taiwan University

1 Introduction

Structural reliability analysis is formulated on the basis of two fundamental assumptions:


(1) the state of the structure is defined in the outcome space of a vector of basic random
variables; (2) the structure can be in one of two states, the safe state or the failure state.
The boundary between the two states in the outcome space is known as the limit-state
surface.

Let the vector V denote the set of basic random variables pertaining to a structure,
and assume the joint probability density function (PDF) f_V(v) is known. The basic
random variables may include parameters defining loads, material properties, structural
geometry, etc.

Failure criteria of structures are usually defined in terms of the basic random variables,
V, and a load effect vector, S, such as stresses and deformations. Then, V and S are
related through the mechanical transformation

S = S(V) (1)

For all but trivial structures, this transformation is available only in an algorithmic sense.
Finite element reliability methods are reliability methods which use finite element
analysis to compute the load effect vector. In accordance with the failure criteria, one
can formulate a limit-state function such that g(v, s) > 0 defines the safe state, g(v, s) ≤ 0
defines the failure state, and g(v, s) = 0 defines the limit-state surface. Then, the
probability of failure of the structure is

P_f = ∫_{g(v,s) ≤ 0} f_V(v) dv    (2)

In reliability analysis, it is convenient to transform the variables V into the standard


normal space through a probability transformation

Y = Y(V) (3)

where the elements of Y are statistically independent and have the standard normal density.
Such a transformation is not unique. The selection of an appropriate transformation
is based on the distribution of V [4,8].

Der Kiureghian and Liu [4] suggested a probability transformation which is particularly
useful in the finite element reliability methods. In this method, a joint distribution
model, originally introduced by Nataf [12], with prescribed marginal distributions and
correlation matrix was proposed. The joint PDF of V is defined such that the variables
Z = (Z_1, ..., Z_n) obtained from the marginal transformations

Z_i = Φ⁻¹[F_{V_i}(V_i)],   i = 1, 2, ..., n    (4)

are jointly normal, where F_{V_i}(·) denotes the marginal distribution of V_i, and Φ(·) denotes
the standard normal cumulative probability. Since the Z_i's are jointly normal with zero means
and unit standard deviations, the vector Z is completely defined by its correlation matrix. The
correlation coefficient ρ_{Z_i,Z_j} of Z_i and Z_j can be expressed in terms of the marginal
distributions and correlation coefficient of V_i and V_j through an integral relation [4]. The
transformation to the standard normal space for the above distribution model, then, is
given by

Y = L_Z⁻¹ Z    (5)

in which L_Z is the lower triangular matrix obtained from the Cholesky decomposition of
the correlation matrix of Z.
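As a concrete illustration of Eqs. 4 and 5, the sketch below maps one sample of correlated non-normal variables into the independent standard normal space. The marginals and the correlation matrix of Z are hypothetical inputs chosen for the example; in practice C_Z would first be obtained from the integral relation mentioned above rather than assumed directly.

```python
import numpy as np
from statistics import NormalDist

def nataf_to_standard_normal(v, marginal_cdfs, corr_z):
    """Transform a sample v of a Nataf-distributed vector V to the standard
    normal space: Eq. 4 (marginal transformation) followed by Eq. 5
    (decorrelation through the Cholesky factor of C_Z)."""
    nd = NormalDist()
    # Eq. 4: z_i = Phi^{-1}[F_{V_i}(v_i)]
    z = np.array([nd.inv_cdf(F(vi)) for F, vi in zip(marginal_cdfs, v)])
    # Eq. 5: y = L_Z^{-1} z, with C_Z = L_Z L_Z^T
    L_z = np.linalg.cholesky(corr_z)
    return np.linalg.solve(L_z, z)

# Hypothetical example: two unit-exponential marginals, rho_Z = 0.5
corr_z = np.array([[1.0, 0.5], [0.5, 1.0]])
cdfs = [lambda t: 1.0 - np.exp(-t)] * 2
y = nataf_to_standard_normal([np.log(2.0), np.log(2.0)], cdfs, corr_z)
# each v_i is the marginal median, so z = (0, 0) and hence y = (0, 0)
```

Since each sampled value sits at its marginal median, the transformed point lands at the origin of the standard normal space, which makes the example easy to check by hand.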

In the first- and second-order reliability methods (FORM and SORM), one searches
for the nearest point on the limit-state surface to the origin in the standard normal space
by solving the constrained optimization problem:

minimize   (yᵀ y)^{1/2}
subject to G(y) = 0    (6)

where G(y) is the limit-state function in the y space. The optimal solution can be found
by the HL-RF method [6,13], which is based on the following recursive formula:

y_{k+1} = [ ( ∇G(y_k) y_k − G(y_k) ) / |∇G(y_k)|² ] ∇G(y_k)ᵀ    (7)

where y_k and y_{k+1} are the values of y at the kth and (k+1)th iterations, respectively, and ∇G(y)
is the gradient of the limit-state function with respect to y. The optimal point, y*, is
called the design point, and the minimum distance, denoted β, is called the reliability
index. According to optimization theory,

β = −[ ∇G(y) y / |∇G(y)| ]_{y*}    (8)

In the first-order reliability method, the limit-state surface in the standard normal
space is replaced by the tangent hyperplane at the design point. The first-order estimate
of the probability of failure, then, is

P_f1 = Φ(−β)    (9)

In the second-order reliability method, the limit-state surface in the standard normal
space is fitted with a second-order surface, usually a paraboloid [3]. The second-order
estimate of the probability of failure is computed in terms of β and the curvatures of the
fitting paraboloid.
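The search of Eqs. 6 and 7 and the first-order estimate of Eq. 9 can be sketched in a few lines. The limit-state function below is a hypothetical algebraic example, not one of the structural limit states discussed in this paper; for a real structure, G and ∇G would come from the finite element solution.

```python
import numpy as np
from statistics import NormalDist

def form_hlrf(G, gradG, y0, tol=1e-10, max_iter=100):
    """FORM using the HL-RF recursion of Eq. 7; returns the design point,
    the reliability index beta, and the first-order estimate Phi(-beta)."""
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        g, grad = G(y), gradG(y)
        # Eq. 7: move to the point of the tangent hyperplane nearest the origin
        y_next = (grad @ y - g) / (grad @ grad) * grad
        if np.linalg.norm(y_next - y) < tol:
            y = y_next
            break
        y = y_next
    beta = np.linalg.norm(y)                 # minimum distance to the origin
    return y, beta, NormalDist().cdf(-beta)  # Eq. 9

# Hypothetical limit state: G(y) = 5 - y1 - y2 + 0.05 (y1 - y2)^2
G = lambda y: 5.0 - y[0] - y[1] + 0.05 * (y[0] - y[1]) ** 2
gradG = lambda y: np.array([-1.0 + 0.1 * (y[0] - y[1]),
                            -1.0 - 0.1 * (y[0] - y[1])])
y_star, beta, pf1 = form_hlrf(G, gradG, np.zeros(2))
```

For this symmetric example the design point is (2.5, 2.5), so the recursion converges in two steps with β = 2.5√2 ≈ 3.54.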

2 Representation of Random Fields

In finite-element reliability analysis, random fields are often used to describe the uncertainties
which possess spatial variability, for example, to describe the Young's modulus of
a plate or the intensity of a distributed load. In most applications, the Gaussian random
field is assumed because of convenience and the lack of alternative models. However, the
Gaussian model is not applicable in many situations. For example, it cannot be used to
model the Young's modulus of a material, which is always positive.

Grigoriu [5] and Der Kiureghian [1] proposed that the Nataf distribution model be
used to model non-Gaussian fields with prescribed marginal distribution and mean and
autocorrelation functions. Let the random field W(x) have the marginal CDF Fw(w(x)),
where x is the position vector. The random field is completely defined by assuming that
the transformed process

Z(x) = Φ⁻¹[F_W(W(x))]    (10)

is Gaussian with zero mean, unit variance and autocorrelation coefficient function ρ_{ZZ}(x_i, x_j).
For any set of x_i and x_j, ρ_{ZZ}(x_i, x_j) can be calculated in terms of the marginal distributions
and correlation coefficient of W(x_i) and W(x_j) [1,5]. This model is very useful in
the finite element reliability methods.
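For a lognormal marginal, the inverse of Eq. 10 collapses to an exponential of the underlying Gaussian field, which makes the translation model easy to sample. The sketch below is illustrative only: it takes ρ_ZZ directly as the target correlation function, whereas strictly ρ_ZZ should first be obtained from the integral relation of [1,5]; the parameter values are hypothetical.

```python
import numpy as np

def sample_lognormal_field(x, mu_ln, sigma_ln, rho_zz, n_samples, seed=0):
    """Sample a homogeneous lognormal translation field at discrete points x.
    For a lognormal marginal, F_W^{-1}[Phi(z)] reduces to exp(mu_ln + sigma_ln*z),
    so each sample is a correlated Gaussian field pushed through exp()."""
    x = np.asarray(x, dtype=float)
    C = rho_zz(np.abs(x[:, None] - x[None, :]))   # rho_ZZ(x_i, x_j)
    L = np.linalg.cholesky(C)                     # imposes the Z correlation
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_samples, len(x))) @ L.T
    return np.exp(mu_ln + sigma_ln * z)

# Hypothetical example: exponential correlation with length 0.25*L, L = 32
rho = lambda d: np.exp(-d / (0.25 * 32.0))
W = sample_lognormal_field(np.linspace(0.0, 32.0, 9), mu_ln=0.0,
                           sigma_ln=0.1, rho_zz=rho, n_samples=2000)
```

Every realization is positive by construction, which is exactly the property the Gaussian model fails to guarantee for a Young's modulus.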

For use in a FORM/SORM algorithm, it is necessary to represent a random field in


terms of a set of basic random variables. Four types of discretization methods have been
suggested for the representation of random fields in stochastic finite element methods and
finite element reliability methods. They include the spatial averaging method [15], the

midpoint method [2,7,16], the interpolation method [11], and series expansion methods
[9,14]. The midpoint method is adopted in this study, since the other methods are strictly
applicable only to Gaussian random fields [10]. In the first three methods the domain of
the field is discretized into a mesh of random field elements (not necessarily coinciding
with the finite element mesh), and the value for each element is described by a single
random variable.

The selection of the random field mesh is an important task in finite element reliabil-
ity analysis. Three factors should be taken into account in the selection of random field
mesh, namely, accuracy, stability, and efficiency. In view of accuracy, the element size is
controlled by the correlation length of the random field. The correlation length of a ran-
dom field is a measure of the distance over which the correlation coefficient approaches
a small value. Hence, smaller random field elements should be used if the correlation
length is small. The second factor is the numerical stability of the transformation to the
standard normal space. If the random field mesh is excessively fine, the discretized ele-
ment variables are highly correlated and their correlation matrix is nearly singular. The
probability transformation then may become numerically unstable. Hence, this factor
provides a lower bound on the element size. The last factor is the efficiency of reliability
analysis. It is obvious that a smaller number of basic variables would require less compu-
tation time. This is especially true when the gradient of the limit-state function is to be
computed by a finite difference scheme. Thus, the elements should be as large as possible
from this viewpoint. Note that the above controlling factors are different from the factor
in the selection of finite element mesh, which is basically governed by the gradient of the
displacement field.
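The stability bound can be made concrete by monitoring the conditioning of the correlation matrix of the midpoint variables: as the element size shrinks relative to the correlation length, neighbouring variables become nearly perfectly correlated and C_Z approaches singularity. A one-dimensional sketch, for a hypothetical field with exponential autocorrelation:

```python
import numpy as np

def midpoint_corr_matrix(n_elem, length, corr_length):
    """Correlation matrix of midpoint-discretized element variables for a 1-D
    homogeneous field with autocorrelation exp(-|dx| / corr_length)."""
    mids = (np.arange(n_elem) + 0.5) * (length / n_elem)
    d = np.abs(mids[:, None] - mids[None, :])
    return np.exp(-d / corr_length)

# Conditioning worsens as the mesh is refined relative to the correlation length
conds = {n: np.linalg.cond(midpoint_corr_matrix(n, 32.0, 8.0))
         for n in (2, 8, 32)}
```

A practical use of such a check is to reject any random field mesh whose C_Z condition number approaches the reciprocal of machine precision, since its Cholesky decomposition may then break down.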

The selection of appropriate random field mesh has been addressed by Der Kiureghian
and Ke [2]. They have suggested that separate meshes be used for the finite elements
and for each of the random fields. In general, it is appropriate to use a finite element
mesh that satisfies the requirements based on the displacement gradient and the corre-
lation lengths of all random fields, and then for each random field choose a mesh that
is coincident with or coarser than the finite element mesh such that the corresponding
probability transformation remains stable. Their suggestions provide a useful guideline
for the selection of random field mesh. However, how the size of random field elements
influences the accuracy and stability of reliability analysis remains unexplored.

There are difficulties in investigating the effect of element size on the stability and
accuracy of reliability analysis. Suppose the random field is described by the Nataf model.
It is natural to apply the probability transformation in Eqs. 4 and 5 to transform the
discretized element variables into the standard normal space. This requires the Cholesky
decomposition of the correlation matrix. The decomposition may break down because of
numerical errors. Since numerical errors are dependent on the solution algorithm and the
precision of real numbers in the code, it is difficult to derive a general guideline for the
lower bound on the element size. Secondly, as mentioned before, the reliability analysis
cannot incorporate a continuous random field directly. Hence, there is no exact solution that can
be used to check the accuracy of the analysis for a certain random field mesh. However, it is believed

that the finer the mesh is, the better the random field is represented. Therefore, error
analysis can proceed by comparing the results of a coarse mesh and a refined mesh.

3 Size Effect of Random Field Elements

Suppose the reliability of the continuum in Fig. 1 under certain loads is investigated. Let
the uncertainties in this problem be modeled by a set of random variables V and a random
field W, where V and W are statistically independent. Following Der Kiureghian and
Ke's suggestion [2], the domain of the continuum is first discretized into a set of finite
elements. Now consider two discretizations of the random field W (see Fig. 1); both
random field meshes are included in the finite element mesh. The fine mesh contains n
elements, and the coarse mesh contains k super elements. Here super element means that
it is a collection of one or more elements in the fine mesh. In other words, the fine mesh
can be divided into k blocks, and each block is coincident with a super element in the
coarse mesh. Note that the finite element mesh remains the same in both cases. That
is, the mechanical properties of the continuum remain the same despite the different
discretizations of the random field.

The discretized element variables associated with the coarse and the fine meshes are,
respectively,

W = { W_1, W_2, ..., W_k }    (11)

and

W̄ = { W̄_1, W̄_2, ..., W̄_k }    (12)

where W_i represents the value of W in the ith super element of the coarse mesh, and W̄_i
represents the values of W in the ith block of the fine mesh.

Let the random field be modeled by the Nataf model. Since V and W are statistically
independent, the probability transformation for the coarse mesh is

y = { y_V ; y_W } = { Y(V) ; L_Z⁻¹ z }    (13)

where z is obtained from w using Eq. 10, and L_Z is the lower triangular matrix obtained
from the Cholesky decomposition of the correlation matrix of Z, C_Z. It is easily derived
from Eq. 8 that

β = −[ ∇_{y_V}G y_V* + ∇_z g z* ] / ( |∇_{y_V}G|² + |∇_z g L_Z|² )^{1/2}    (14)

where ∇_{y_V}G and ∇_z g are the gradients of the limit-state function with respect to y_V and
z, respectively, and y_V* and z* are the values of y_V and z at the design point, respectively.

[Figure 1: the example continuum, showing the finite element mesh together with (a) a coarse random field mesh and (b) a refined random field mesh.]

Figure 1. The example continuum with (a) coarse random field mesh; (b) refined random field mesh

Similarly, for the refined mesh

ȳ = { y_V ; ȳ_W } = { Y(V) ; L̄_Z⁻¹ z̄ }    (15)

and

β̄ = −[ ∇_{y_V}Ḡ ȳ_V* + ∇_z̄ ḡ z̄* ] / ( |∇_{y_V}Ḡ|² + |∇_z̄ ḡ L̄_Z|² )^{1/2}    (16)

where the bar denotes the corresponding quantity for the refined mesh. Note that the
probability transformation for V remains the same in the latter case.

In general, the continuum and the loading are not the same for the two discretizations
at the design point. To compare β and β̄, one in fact has to solve the two optimization
problems in their entirety. In order to avoid solving the optimization problem twice, one can
do the following: First find the design point, y*, for the coarse discretization. Let the
corresponding design point in the original space be [v*, w*]. Now assume that for the
refined mesh,

v̄* = v*    (17)

w̄_i* = w_j*   for each fine-mesh element i in block j    (18)

It follows that

ȳ_V* = y_V*    (19)

z̄_i* = z_j*   for each fine-mesh element i in block j    (20)

This assumption is good if the mesh does not change drastically.

For this special realization, the two continua and their loadings are identical. Observe
that the values of the limit-state function and its gradient in the original space are only
dependent on the mechanical properties and loading of the continuum, not the probability
transformation. It follows that

∇_{y_V}G |_{y_V*, z*} = ∇_{y_V}Ḡ |_{ȳ_V*, z̄*}    (21)

∂g/∂z_j |_{y_V*, z*} = Σ_{element i ∈ block j} ∂ḡ/∂z̄_i |_{ȳ_V*, z̄*}    (22)

Substituting Eqs. 19 to 22 into Eq. 16, one gets the estimate

β̄ ≈ −[ ∇_{y_V}G y_V* + ∇_z̄ ḡ z̄* ] / ( |∇_{y_V}G|² + |∇_z̄ ḡ L̄_Z|² )^{1/2}    (23)

It is seen that except C_Z̄ and ∇_z̄ ḡ, all the quantities in the above expression are available
from the coarse mesh solution. In fact, if the gradient is computed analytically, ∇_z̄ ḡ can
also be obtained using the coarse mesh solution with little extra computation. This is
because in that case ∇_z g is obtained by computing the gradient of the response with respect to
the loading, properties, or nodal coordinates of each finite element and then collecting the
contributions from all the finite elements in a random field element. Since the gradient
with respect to each finite element is readily available from the coarse mesh solution,
one can simply reassemble these quantities according to the refined mesh. On the other
hand, if the gradient is computed by a finite difference scheme, ∇_z̄ ḡ can be obtained by
a deterministic solution for the gradient. No iteration is required.
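The reassembly step described above is pure bookkeeping over per-finite-element gradient contributions; the sketch below illustrates it with hypothetical placeholder numbers standing in for the per-element contributions to ∂g/∂(field value).

```python
import numpy as np

def reassemble_gradient(per_fe_grad, block_of_fe):
    """Collect per-finite-element gradient contributions into random-field
    element gradients: the derivative with respect to random-field element j
    is the sum of the contributions of the finite elements assigned to j."""
    grad = np.zeros(max(block_of_fe) + 1)
    for contrib, j in zip(per_fe_grad, block_of_fe):
        grad[j] += contrib
    return grad

# 8 finite elements; coarse mesh = 2 super elements, refined mesh = 4 elements
per_fe = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
coarse = reassemble_gradient(per_fe, [0, 0, 0, 0, 1, 1, 1, 1])
fine = reassemble_gradient(per_fe, [0, 0, 1, 1, 2, 2, 3, 3])
```

Because the same per-element contributions are regrouped, each coarse super-element gradient equals the sum of the gradients of the fine blocks it contains, which is the grouping property expressed by Eq. 22.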

A good way to understand the nature of the estimate of β̄ is through a geometrical
interpretation. Let ȳ* denote the assumed design point for the refined mesh, as defined
in Eqs. 19 and 20. According to the assumptions in Eqs. 17 and 18, the two continua and
their loadings are identical. Hence, Ḡ(ȳ*) = G(y*) = 0. In other words, ȳ* must lie
on the limit-state surface. Apparently, one can also use (ȳ*ᵀ ȳ*)^{1/2} to estimate β̄. Since
β̄ is by definition the nearest distance from Ḡ(y) = 0 to the origin, (ȳ*ᵀ ȳ*)^{1/2} ≥ β̄.
Therefore, (ȳ*ᵀ ȳ*)^{1/2} always provides an upper bound to β̄. Experience shows that this
upper bound is often far above the true value. On the other hand, using Eq. 23 to
estimate β̄ amounts to replacing the limit-state surface by the tangent hyperplane at ȳ*
and then using the shortest distance from the hyperplane to the origin as the estimate (see
Fig. 2). Hence, when the limit-state surface is linear and the basic variables are normal,
the estimate is exact. For the general case, Eq. 23 gives a good approximation if the
limit-state surface is nearly flat near the design point.
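The two estimates can be compared on a toy linear limit state, where the tangent-plane construction underlying Eq. 23 is exact while the norm of an arbitrary surface point overestimates the reliability index; the surface and the point below are hypothetical.

```python
import numpy as np

# Hypothetical linear limit state in the standard normal space: G(y) = 5 - a.y
a = np.array([1.0, 2.0])
G = lambda y: 5.0 - a @ y
grad = -a                              # constant gradient of a linear G
beta_true = 5.0 / np.linalg.norm(a)    # exact distance from origin to G = 0

# A point on the limit-state surface that is NOT the design point
y_bar = np.array([5.0, 0.0])           # G(y_bar) = 0

upper_bound = np.linalg.norm(y_bar)                     # (y_bar' y_bar)^(1/2)
tangent_est = abs(grad @ y_bar) / np.linalg.norm(grad)  # distance to tangent plane
```

Here upper_bound = 5 while tangent_est equals the true index √5 ≈ 2.24, illustrating why the hyperplane-based estimate is preferred over the norm of the assumed design point.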

Another way of viewing the estimation is through the optimization process. When
searching for the design point for the refined mesh, one can choose an initial point according
to Eqs. 17 and 18. This is the best choice possible with the information at hand.
Suppose the recursive formula in Eq. 7 is applied to search for the design point,
and only one iteration is performed. Since Ḡ(ȳ*) = 0, one gets the reliability index as
estimated by Eq. 23. It is known that the HL-RF method often converges fast, unless
the limit-state surface fluctuates rapidly near the design point. This explains why Eq. 23
usually provides a good estimate of β̄ and is better than (ȳ*ᵀ ȳ*)^{1/2}.

Recall the three requirements on the selection of random field mesh, namely, accuracy,
stability, and efficiency. If a random field is discretized into a coarse mesh, the second and
the third requirements will be satisfied, but not the first one. With the above approach,
the reliability index for the refined mesh can be estimated using the results of the coarse
mesh solution. Hence, the accuracy requirement can be satisfied as well.

[Figure 2: the limit-state surface Ḡ(y) = 0 and its tangent hyperplane at the assumed design point ȳ*.]

Figure 2. Geometrical interpretation of the β̄ estimate

4 Examples

Two examples are examined in this section to check the feasibility of the above estimation.
First consider a fixed-ended beam with stochastic flexural rigidity EI(x). This beam
is subjected to a distributed load with stochastic intensity W(x), as shown in Fig. 3.
Both EI(x) and W(x) are modeled as homogeneous Gaussian processes. The mean and
coefficient of variation of EI(x) are 1,125,000 k-ft² and 0.20, respectively, and the mean
and coefficient of variation of W(x) are 8.0 k/ft and 0.30, respectively. The autocorrelation
coefficient functions are assumed to be of the form

ρ_{EI,EI}(Δx) = exp( −|Δx| / (0.5L) )    (24)

ρ_{WW}(Δx) = exp( −|Δx| / (0.25L) )    (25)

where Δx is the distance between two points, and L = 32 ft is the beam span.

[Figure 3: a fixed-ended beam of span 32 ft with flexural rigidity EI(x), carrying a distributed load of intensity W(x).]

Figure 3. The beam example

The beam is modeled with 32 uniform finite elements. It is considered failed if either
the midspan deflection exceeds 0.6 in, or the left-end moment exceeds 1,100 k-ft. This
example has been adopted by Der Kiureghian and Ke [2] to illustrate the convergence of
β with the number of random field elements.

This problem is analyzed with six random field discretizations, containing 1, 2, 4,
8, 16, and 32 uniform random field elements, respectively. The results of the first five
analyses are then substituted into Eq. 23 to estimate the reliability index β_32-element for
the beam with 32 random field elements. Figure 4 shows the computed and estimated
reliability indices for the six discretizations. It is seen that the reliability index converges
as the number of random field elements increases.

For the displacement limit state, the reliability indices for the cases with 1 and 32
random field elements are 2.643 and 4.162, and the corresponding failure probabilities
are 4.15 × 10⁻³ and 1.60 × 10⁻⁵, respectively. Obviously, the random field mesh with a
single element is not acceptable. However, using Eq. 23 to update its results yields a
reliability index of 4.188 and a failure probability of 1.40 × 10⁻⁵. These estimated values are
very close to the values obtained by solving the refined mesh problem. The estimates
are also good for the other discretizations and for the moment limit state. The estimation
errors in all cases are less than 1.5%.

[Figure 4: computed β, estimated β, and β_32-element plotted against the number of random field elements, for (a) the displacement limit state and (b) the moment limit state.]

Figure 4. Influence of random field mesh on the reliability index
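The β and P_f pairs quoted above are linked through Eq. 9, P_f = Φ(−β), and the relation is easy to check numerically; small differences from the printed probabilities reflect rounding of the reported indices.

```python
from statistics import NormalDist

pf = lambda beta: NormalDist().cdf(-beta)  # Eq. 9: P_f = Phi(-beta)

pf_1elem = pf(2.643)    # 1-element mesh, displacement limit state
pf_32elem = pf(4.162)   # 32-element mesh
pf_est = pf(4.188)      # estimate obtained from the 1-element solution
```

The three values come out near 4.1 × 10⁻³, 1.6 × 10⁻⁵ and 1.4 × 10⁻⁵, consistent with the probabilities quoted in the text to the precision of the rounded reliability indices.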

As a second example, consider a 16 × 16 square plate subject to two nodal loads, P₁
and P₂, at its upper right corner, as shown in Fig. 5. The magnitudes of P₁ and P₂ are
assumed to be independent normals with means μ₁ = 4,000 and μ₂ = 1,000, respectively,
and coefficients of variation equal to 0.1. The Young's modulus of the plate is modeled
as a homogeneous, lognormal random field with a mean μ_E = 1,000,000 and a standard
deviation σ_E = 100,000. The autocorrelation coefficient function of E is assumed to have
the isotropic form

ρ_{E,E}(Δx) = exp( −Δxᵀ Δx / (0.25L)² )    (26)

where Δx is the position vector between two points, and L = 16 is the width of the plate.
The Poisson ratio is deterministic and is equal to 0.2. No units are specified for the above
quantities. One may assign appropriate units to these quantities as needed.

[Figure 5: the 16 × 16 plate discretized into 64 square finite elements; the nodal loads act at node 81 at the upper right corner, and element 1 lies at the opposite corner.]

Figure 5. The stochastic plate



The plate is discretized into 64 finite elements, as shown in Fig. 5. Two failure criteria
are considered: the excess of the horizontal displacement at node 81 above a threshold of
0.03 and the excess of the tensile principal stress at element 1 above a threshold of 400.

Six discretizations of the random field in the plate are considered, containing 1, 4,
16, 19, 28, and 64 random field elements, respectively (see Fig. 6). These random field
meshes were constructed in a way similar to the refining process of the finite element
mesh. That is, the one-element mesh was constructed first. Then the mesh was refined
such that the previous mesh was contained in the refined mesh. This is required when we
examine the convergence of β with the increase of the number of random field elements.

The computed and estimated β's for these meshes are shown in Fig. 7 for both the
displacement and the stress limit states. It is seen that in the displacement limit state,
β converges as the number of random field elements increases. In the stress limit state,
β also shows signs of convergence, although the convergence is not as smooth as in the displacement
limit state. This is because the stress in an element depends more on the local properties
around the element and, thus, is more sensitive to the change of the random field mesh.

Consider the case with only one random field element. The reliability index is 1.405
for the displacement limit state, and 3.590 for the stress limit state. The respective β's for
the 64-element case are 1.789 and 2.878. If Eq. 23 is applied to the 1-element solution to
estimate β for the 64-element case, the reliability indices are 1.775 and 2.810, respectively,
for the displacement and stress limit states. These are, in fact, the worst cases among the
cases studied. If the coarse mesh contains more random field elements, the estimation
errors are even smaller, as shown in Fig. 7. Notice that the estimation is good even when
the failure probability is as high as 4.7% for the displacement limit state.

5 Summary and Conclusion

A general approach is introduced in this paper to investigate the size effect of the random
field elements on finite element reliability analysis. An expression is derived to estimate
the reliability index of a continuum with refined random field mesh using the results of
the coarse mesh solution. The estimation is justified by a geometrical interpretation and
from an optimization point of view. The beam and the plate examples illustrate the
practicability of this approach.

(a) 1 element (b) 4 elements

(c) 16 elements (d) 19 elements

(e) 28 elements (f) 64 elements

Figure 6. Random field discretizations of the stochastic plate

[Figure 7: computed β, estimated β, and β_64-element plotted against the number of random field elements, for (a) the displacement limit state and (b) the stress limit state.]

Figure 7. Influence of random field mesh on the reliability index

References

[1] A. Der Kiureghian, "Multivariate Distribution Models for Structural Reliability," in Transactions, the 9th International Conference on Structural Mechanics in Reactor Technology, vol. M, pp. 373-379, Lausanne, Switzerland, Aug. 1987.

[2] A. Der Kiureghian and B.-J. Ke, "The Stochastic Finite Element Method in Structural Reliability," Probabilistic Engineering Mechanics, vol. 3, no. 2, pp. 83-91, 1988.

[3] A. Der Kiureghian, H-Z. Lin, and S-J. Hwang, "Second-Order Reliability Approximations," Journal of Engineering Mechanics, vol. 113, no. 8, pp. 1208-1225, ASCE, Aug. 1987.

[4] A. Der Kiureghian and P-L. Liu, "Structural Reliability Under Incomplete Probability Information," Journal of Engineering Mechanics, vol. 112, no. 1, pp. 721-740, ASCE, Jan. 1986.

[5] M. Grigoriu, "Crossings of Non-Gaussian Translation Processes," Journal of Engineering Mechanics, vol. 110, no. 4, pp. 610-620, ASCE, April 1984.

[6] A. M. Hasofer and N. C. Lind, "Exact and Invariant Second-Moment Code Format," Journal of Engineering Mechanics, vol. 100, no. 1, pp. 111-121, ASCE, Feb. 1974.

[7] T. Hisada and S. Nakagiri, "Role of the Stochastic Finite Element Method in Structural Safety and Reliability," in Proceedings, 4th International Conference on Structural Safety and Reliability, pp. 385-394, Kobe, Japan, May 1985.

[8] M. Hohenbichler and R. Rackwitz, "Non-Normal Dependent Vectors in Structural Safety," Journal of Engineering Mechanics, vol. 107, no. 6, pp. 1227-1238, ASCE, Dec. 1981.

[9] M. Lawrence, "Basis Random Variables in Finite Element Analysis," International Journal for Numerical Methods in Engineering, vol. 24, pp. 1849-1863, 1987.

[10] P-L. Liu and A. Der Kiureghian, "Finite-Element Reliability Methods for Geometrically Nonlinear Stochastic Structures," Report No. UCB/SEMM-89/05, Department of Civil Engineering, Division of Structural Engineering, Mechanics, and Materials, University of California, Berkeley, CA, Jan. 1989.

[11] W. K. Liu, T. Belytschko, and A. Mani, "Random Field Finite Elements," International Journal for Numerical Methods in Engineering, vol. 23, pp. 1831-1845, 1986.

[12] A. Nataf, "Détermination des distributions dont les marges sont données," Comptes Rendus de l'Académie des Sciences, vol. 225, pp. 42-43, Paris, 1962.

[13] R. Rackwitz and B. Fiessler, "Structural Reliability Under Combined Load Sequences," Computers and Structures, vol. 9, pp. 489-494, 1978.

[14] P. D. Spanos and R. Ghanem, "Stochastic Finite Element Expansion for Random Media," Report NCEER-88-0005, March 1988.

[15] E. H. Vanmarcke and M. Grigoriu, "Stochastic Finite Element Analysis of Simple Beams," Journal of Engineering Mechanics, vol. 109, no. 5, pp. 1203-1214, ASCE, Oct. 1983.

[16] M. Shinozuka and G. Dasgupta, "Stochastic Finite Element Methods in Dynamics," in Proceedings, 3rd ASCE EMD Specialty Conference on Dynamic Response of Structures, pp. 44-54, University of California at Los Angeles, Los Angeles, CA, 1986.
RELIABILITY-BASED OPTIMIZATION USING SFEM

Sankaran Mahadevan* & Achintya Haldar**


*Department of Civil & Environmental Engineering
Vanderbilt University, Nashville, TN 37235, USA
**Department of Civil Engineering & Engineering Mechanics
University of Arizona, Tucson, AZ 85721, USA

Introduction

The problem of reliability-based optimum design of realistic structures requires the reso-
lution of several issues, some of which are as follows. (i) Most practical structures have compli-
cated configurations, and their analysis has to be done using computer-based numerical pro-
cedures such as finite element analysis; for such structures, the response is not available as a
closed-form expression in terms of the basic parameters. As a result, earlier methods of reliabil-
ity analysis and reliability-based design are not convenient for application to large structures.
(ii) Several stochastic parameters not only have random variation across samples but also
fluctuations in space; i.e., they may be regarded not simply as random variables, but as random
fields. This complicates the reliability analysis and subsequent design even further. (iii) The sto-
chastic design variables may have different types of distributions (several of them being Log-
normal, Type I Extreme Value etc.). Also, there may be statistical correlations among the design
variables. Such information has to be rationally incorporated in the optimization. (iv) Two types
of performance have to be addressed: one at the element level and the other at the system level.
Consideration of element level reliabilities in design optimization ensures a distribution of
weight such that there is uniform risk in the structure, whereas the consideration of system relia-
bility accounts for interacting failure modes and ensures overall safety. Therefore an optimiza-
tion algorithm that considers both types of reliability is desirable.
The Stochastic Finite Element Method (SFEM) appears capable of efficiently solving the
aforementioned problems. Given a probabilistic description of the basic parameters, SFEM is
able to compute the stochastic response of the structure in terms of either the response statistics
such as mean, variance, etc. or the probability of failure considering a particular limit state. This
is done by keeping account of the variation of the quantities computed at every step of the deter-
ministic analysis, in terms of the variation of the basic variables. This capability makes SFEM
attractive for application to reliability-based optimum design of large structures. Such an optim-
ization procedure is presented in this paper using SFEM-based reliability analysis, and illus-
trated with the help of a numerical example.

Reliability Analysis With SFEM

In the Advanced First Order Second Moment Method, a reliability index β is obtained as
β = (y*ᵀ y*)^{1/2}, where y* is the point of minimum distance from the origin to the limit state sur-
face G(Y) = 0, and Y is the vector of random variables of the structure transformed to the
space of reduced variables. In this formulation, G(Y) > 0 denotes the safe state, and G(Y) < 0
denotes the failure state. The probability of failure is estimated as P_f = Φ(−β), where Φ is the
cumulative distribution function of a standard normal variable. Earlier algorithms for reliability
analysis in this method solved the limit state equation at each iteration point to find β, which
limited their application to simple problems where the limit state was available as a closed-form
expression in terms of the basic random variables. Alternatively, one could use the recursive for-
mula of Rackwitz and Fiessler [1] to evaluate y*:

y_{i+1} = [ y_iᵀ α_i + G(y_i) / |∇G(y_i)| ] α_i    (1)

where ∇G(y_i) is the gradient vector of the performance function at y_i, the checking point in the
ith iteration, and α_i = -∇G(y_i) / |∇G(y_i)| is the unit vector normal to the limit state surface
away from the origin. Since this method uses only the value and the gradient of the performance
function at any iteration point and does not require the explicit solution of the equation
G(y_i) = 0, it can be used for structures whose performance function is not available in closed
form. While G(y_i) is available from the usual structural analysis, ∇G(y_i) is computed using
SFEM.
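The recursion of Eq. (1) can be sketched as a stand-alone routine. This is an illustration only, not the paper's SFEM implementation: the linear performance function and its gradient in the example are placeholders of our own, whereas in the paper the gradient ∇G would come from the stochastic finite element computation.

```python
import numpy as np

def hlrf(G, grad_G, y0, tol=1e-8, max_iter=100):
    """Rackwitz-Fiessler recursion for the design point y* in reduced space, Eq. (1)."""
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        g = G(y)
        dg = grad_G(y)
        norm = np.linalg.norm(dg)
        alpha = -dg / norm                       # unit normal away from the origin
        y_new = (y @ alpha + g / norm) * alpha   # Eq. (1)
        if np.linalg.norm(y_new - y) < tol:
            y = y_new
            break
        y = y_new
    beta = float(np.linalg.norm(y))              # beta = (y*^T y*)^(1/2)
    return y, beta

# Placeholder linear limit state in reduced space: G(y) = 3 - y1 - y2
G = lambda y: 3.0 - y[0] - y[1]
grad_G = lambda y: np.array([-1.0, -1.0])
y_star, beta = hlrf(G, grad_G, [0.0, 0.0])
```

For a linear limit state the recursion converges in one step; the example above gives β = 3/√2.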
In SFEM, the computation of ∇G(y_i) is achieved by using the chain rule of differentiation
[2], through the computation of partial derivatives of quantities such as the stiffness matrix,
nodal load vector, displacement-to-generalized response transformation matrix, etc., with respect
to the random variables. This finally leads to the computation of the partial derivatives of the
response as well as of the limit state with respect to the basic random variables X, resulting in
the estimation of the failure probability. The detailed implementation of this approach is
described in [2,3]. Thus the first problem identified above in reliability-based optimization of
realistic structures is solved. The second problem - non-normality of some of the random vari-
ables - is handled by transforming all the random variables to equivalent normal variables.
This can be achieved in a general way using the Rosenblatt transformation [4], or specifically by
matching the probability density function (pdf) and the cumulative distribution function (cdf) of
the non-normal variable at each iteration point y_i with those of an equivalent normal variable
[1].
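The pdf/cdf-matching of [1] has a closed form for common distributions. The sketch below works it out for a lognormal variable (our choice, for illustration): at the checking point x, the equivalent normal has σ_eq = ζx and μ_eq = x(1 - ln x + λ), where λ and ζ are the mean and standard deviation of ln X.

```python
import math

def std_normal_pdf(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def equivalent_normal_lognormal(lam, zeta, x):
    """Equivalent normal (mu_eq, sigma_eq) for a lognormal variable at x,
    obtained by matching cdf and pdf as in Rackwitz and Fiessler [1]:
    u = Phi^{-1}(F_X(x)) = (ln x - lam) / zeta for the lognormal,
    sigma_eq = pdf_N(u) / f_X(x), mu_eq = x - sigma_eq * u."""
    u = (math.log(x) - lam) / zeta
    f_x = std_normal_pdf(u) / (zeta * x)     # lognormal density at x
    sigma_eq = std_normal_pdf(u) / f_x       # reduces to zeta * x
    mu_eq = x - sigma_eq * u
    return mu_eq, sigma_eq

mu_eq, sigma_eq = equivalent_normal_lognormal(lam=0.0, zeta=0.1, x=1.0)
```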
Many structural parameters exhibit spatial fluctuation in addition to random variation
across samples. Examples of such parameters are distributed loads and material and sectional
properties that vary over the length of a beam, or over the surface of a plate etc. Such quantities
need to be expressed as random fields. In SFEM-based reliability analysis, these random fields
can be discretized into sets of correlated random variables [5]. However, this greatly increases
the size of the problem. To maintain computational efficiency, sensitivity analysis can be used to
measure the relative influence of the random variables on the reliability index; only those vari-
ables that have a high influence need to be considered as random fields [3]. In fact, the random-
ness in variables with very little influence may altogether be ignored in subsequent iterations of
the reliability analysis. Further, mesh refinement studies have been carried out to minimize the
number of discretized random variables to effectively represent the random fields [3]; this
further improves the computational efficiency.
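As an illustration of the discretization step, the correlation matrix of a one-dimensional random field with exponentially decaying autocorrelation can be sketched as below. The midpoint approximation used here is a simplification of our own; the paper itself uses the spatial averaging method [3,5].

```python
import math

def midpoint_correlation(n_elem, length, corr_length):
    """Correlation matrix for a 1-D random field discretized at element
    midpoints, assuming rho(tau) = exp(-|tau| / corr_length)."""
    h = length / n_elem
    mids = [(i + 0.5) * h for i in range(n_elem)]
    return [[math.exp(-abs(a - b) / corr_length) for b in mids] for a in mids]

# Beam of 30 ft, four elements, correlation length = one-fourth of the span
R = midpoint_correlation(4, 30.0, 7.5)
```

Adjacent element midpoints are one correlation length apart here, so their correlation is exp(-1) ≈ 0.368.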

Optimization Algorithm

Any optimization procedure has three aspects: objective function, constraints, and the
algorithm to search for the optimum solution. Different objective functions have been used in
reliability-based optimization studies in the past, such as minimization of weight (e.g. Moses
and Stevenson [6]), minimization of cost (e.g. Mau and Sexsmith [7]), and minimization of
the probability of failure (e.g. Nakib and Frangopol [8]). Multi-objective, multi-constraint
optimization techniques have also been used (e.g. Frangopol [9]). In this paper, a simple and
convenient objective function, the minimization of the total weight of the structure, is chosen. It
will be apparent later that the choice of other objective functions mentioned above will not
affect the general applicability of the proposed method.
All the constraints to be considered here are related to the reliability of the structure. Two
types of reliability constraints can be used: component reliability and system reliability. The
former measures the reliability of the individual components corresponding to various limit
states, while the latter accounts for simultaneously active individual failure modes and measures
an overall failure probability of the system. The use of component reliability closely resembles
the approach used in design offices, namely the proportioning of individual members based on
the forces acting on them. It also facilitates control at the element level, and helps to ensure uniform
risk in the structure. The use of system reliability as the only constraint ensures overall
safety of the structure, but it is difficult to estimate for large, realistic structures with the present
state of the art, and may result in nonuniform risk for different members. In this paper, the
optimum design is defined as that in which the reliability indices corresponding to all the
element-level limit states are within a desired narrow range. At each step, the system reliability
constraints are checked to make sure that the overall failure probability is less than the desired
value.
The element reliability constraints are written as

β_i^l ≤ β_i ≤ β_i^u ,   i = 1, 2, ..., m        (2)

where the lower bound β_i^l specifies the minimum required safety level for the ith limit state,
while the upper bound β_i^u indicates the desired range of β_i, and m is the number of limit states.
The optimum design is said to be reached if all the β_i values fall within the desired range.
Some element-level reliability constraints may simply require the satisfaction of the limit
state equation at the nominal values, as in the case of code-specified serviceability criteria. Such
constraints may be written as

g_j ≥ 0.0 ,   j = 1, 2, ..., l        (3)

where g_j is the performance function for the jth such limit state and l is the number of such limit
states.

The system reliability constraints are written as

P_f ≤ P_f^a        (4)

where P_f is the overall failure probability of the structural system, which is required to be less
than an acceptable value P_f^a.
The reliability indices corresponding to all the element-level limit states are obtained
using the SFEM-based reliability analysis described earlier. The system reliability constraints
may be evaluated using any of the well-known methods [10]. Since system reliability is used
only to check the feasibility of a design, an approximate but fast method such as the use of
Cornell's upper bound may also be considered adequate.

The feasible region for the design is defined by Eq. (3) and by the lower bounds of Eq.
(2), indicating the acceptable level of risk for each element-level limit state, and by Eq. (4),
indicating the acceptable risk at the system level. Reliability-based design formats such as LRFD
are derived based upon this idea of acceptable risk; the load and resistance factors correspond to
prespecified target values of β. Thus one may select the lower bounds of Eq. (2) to be the same
as the target β values used in reliability-based design codes. The bound in Eq. (4) needs to be
based on experience regarding the acceptable level of system reliability. The upper bounds of β
in Eq. (2) are established such that the β values of different elements fall within a narrow range
to assure uniformity in the risk levels. Referring to Eq. (2), it can be seen that it is also possible
to specify different desired risk levels for different limit states, thus accounting for the fact that
all the limit states may not have equal importance.
Starting with a feasible trial structure, the algorithm achieves uniform risk within the
feasible region by searching only in the direction of reducing β values. This means that the
algorithm needs to examine only those configurations whose member sizes are less than those of
the trial structure. Any movement produces a reduction in weight, resulting in minimum weight
for the optimum solution. If the new design still satisfies the lower bounds of Eqs. (2), (3) and
(4), it is accepted as a success; otherwise it is rejected as a failure and the step size is halved in
that direction until no significant improvement is possible.
The convergence of the algorithm is accelerated by using discrete step sizes which are
determined by different ranges in the values of (β_i - β_i^l) at any iteration. For example, one may
choose step sizes as

Δ = 0.3  for  β_i - β_i^l ≥ 2.0
  = 0.2  for  1.0 ≤ β_i - β_i^l < 2.0
  = 0.1  for  0.25 < β_i - β_i^l < 1.0

Such a method is easy and fast to implement; even though it is an approximate rule, it is
sufficient since the purpose of the algorithm is not to find an absolute optimum but only to
ensure that all the β_i values are within a desired range. Furthermore, it also allows the use of
different step sizes in different directions. The search is stopped when either all the β_i's are
within the desired range or when the smallest step size in every coordinate direction is smaller
than a prescribed tolerance level. Before beginning the optimum design algorithm, a feasibility
check may be made; if the trial structure is infeasible, then a feasible starting point may be
achieved by simply reversing the search directions and using only the lower bounds of the con-
straints.
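The discrete step-size rule above can be sketched as a simple function. The thresholds are those quoted in the text; the zero step returned near the lower bound is our own reading of the stopping behaviour.

```python
def step_size(beta_i, beta_lower):
    """Discrete step size based on the margin (beta_i - beta_i^l)."""
    margin = beta_i - beta_lower
    if margin >= 2.0:
        return 0.3
    if margin >= 1.0:
        return 0.2
    if margin > 0.25:
        return 0.1
    return 0.0  # close to the lower bound: do not move in this direction
```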

Numerical Example

A steel portal frame, shown in Fig. 1, is subjected to a lateral load H and a uniformly dis-
tributed vertical load W. There are five basic random variables in this structure, whose statistical
description is given in Table 1. The design variables are the plastic section modulus (Z) of the
various members. The area (A) and moment of inertia (I) are related to Z through the following
expressions derived using regression analysis (refer to [11] for details):
A = -6.248 + 1.211 Z^(2/3)        (5)

I = 20.36 + 22.52 A + 0.22 A^2
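The regression expressions of Eq. (5) can be evaluated directly; a small sketch, with Z in in^3 as in Table 1 (A in in^2 and I in in^4 implied):

```python
def section_properties(Z):
    """Area and moment of inertia from plastic section modulus, Eq. (5)."""
    A = -6.248 + 1.211 * Z ** (2.0 / 3.0)
    I = 20.36 + 22.52 * A + 0.22 * A ** 2
    return A, I

A, I = section_properties(132.0)   # the initial design of the numerical example
```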


The two columns have identical cross-sections. The uniformly distributed load W is treated as a
random field and discretized into four elements, using the spatial averaging method [3]. The
autocorrelation function for the random field is assumed to decay exponentially with a correla-
tion length equal to one-fourth the length of the beam. The minimum weight design of this
structure is required to satisfy the following reliability constraints:

Element-level Strength Limit States  The following performance criterion (combined axial
compression and bending) is observed to be critical for all three members:

P/P_u + C_m M / [M_p (1 - P/P_E)] ≤ 1.0        (6)

where P is the applied axial load on the member, M is the applied bending moment, P_u is the
ultimate axial load that can be supported by the member when no moment is applied, P_E is the
Euler buckling load in the plane of M, P_y = A F_y, where F_y is the yield strength, M_p = Z F_y is
the plastic moment capacity, and C_m is as defined in the AISC LRFD Specifications [12]. For all
three members in the frame, C_m = 0.85 is used. Of the two columns, the one on the right is
found to be critical.
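The interaction criterion of Eq. (6) can be written as a performance function G = 1 - (demand ratio), positive in the safe state. The numeric inputs below are placeholders for illustration, not values from the paper:

```python
def interaction_g(P, M, P_u, P_E, M_p, C_m=0.85):
    """Performance function for combined axial compression and bending,
    Eq. (6): safe when G > 0, failed when G < 0."""
    demand = P / P_u + C_m * M / (M_p * (1.0 - P / P_E))
    return 1.0 - demand

# Placeholder loads and capacities, for illustration only
g = interaction_g(P=100.0, M=150.0, P_u=400.0, P_E=900.0, M_p=450.0)
```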
Fig. 1. Numerical Example: Steel Portal Frame (lateral load H, uniformly distributed vertical
load W; span 30 ft, height 15 ft; figure not reproduced)

No.  Symbol  Units  Mean   Coefficient of Variation  Type of Distribution
1    H       kips   5.0    0.37                      Type I
2    W       k/ft   2.4    0.20                      Lognormal
3    Fy      ksi    38.00  0.11                      Lognormal
4    Z1      in^3   132.0  0.10                      Lognormal
5    Z2      in^3   132.0  0.10                      Lognormal

Table 1. Description of Basic Random Variables



The reliability constraint corresponding to this performance criterion for the critical column and
the beam is given by

β_i^l ≤ β_i ≤ β_i^u ,   i = 1, 2        (7)

Element-level Serviceability Limit States  The design is also required to satisfy two serviceability
constraints. The limiting vertical deflection at the midspan of the beam = span/240, while
the limiting side-sway at the top of the frame = height/400. In the present example, it is required
for the sake of illustration that the serviceability limits be satisfied at the mean values of the random
variables; thus no reliability ranges are defined. Therefore these two constraints are written
as

g_3 = 1.0 - (midspan deflection of beam)/(span/240) ≥ 0.0        (8)

g_4 = 1.0 - (sidesway at the top)/(height/400) ≥ 0.0        (9)

System Reliability  The overall probability of plastic collapse of the frame is considered for
system reliability. This is computed as described in [13], considering ten possible collapse
modes of the frame and finding the probability that at least one of the ten possible collapse
modes will occur. Cornell's upper bound is used as an approximation for the system failure
probability, for the sake of illustration. That is,

P_f ≤ Σ_{i=1}^{m} P_fi        (10)

where P_f is the overall failure probability, P_fi is the failure probability of the individual
mode i, and m is the number of failure modes. The corresponding system reliability constraint is
written as

P_f ≤ 10^-5        (11)
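Cornell's upper bound of Eq. (10) is simply the sum of the individual mode failure probabilities; a minimal sketch, with mode probabilities invented for illustration:

```python
def cornell_upper_bound(mode_probs):
    """Upper bound on system failure probability, Eq. (10)."""
    return sum(mode_probs)

def system_feasible(mode_probs, p_allow=1e-5):
    """System reliability constraint, Eq. (11)."""
    return cornell_upper_bound(mode_probs) <= p_allow

# Illustrative mode failure probabilities (not from the paper)
ok = system_feasible([2.0e-6, 5.0e-7, 1.0e-6])
```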

The step sizes for the optimization algorithm are as shown earlier. Mahadevan and Haldar
[11] discussed elsewhere in detail the practical implementation of the proposed optimization
procedure to satisfy the aforementioned constraints, and presented several strategies to improve
computational efficiency.

Results and Observations

Table 2 traces the steps in the proposed reliability-based weight-minimization procedure
for designing the portal frame. In step 3, the solution is infeasible due to the violation of the system
reliability constraint. At this point, the step sizes are halved and the search is continued.
However, the algorithm ends without achieving the element-level β's within the desired range,
in order to satisfy system reliability. Thus system reliability is more critical than the element
reliabilities in this example. The authors have shown other examples elsewhere [11,14] in which
element reliabilities govern, and enable the achievement of the β's within the desired range.
Thus both types of reliability can be incorporated in structural optimization through the proposed
algorithm.

Conclusions

A procedure for the reliability-based optimization of structures without closed-form solutions
has been presented in this paper. The Stochastic Finite Element Method is used for the reliability
analysis of such complicated structures. The method is able to include consideration of
non-normality in the stochastic design variables, and statistical correlations among them. Parameters
that need to be modeled as random fields are discretized into sets of correlated random
variables and used in SFEM-based reliability analysis. Sensitivity analysis and mesh refinement
studies help to maintain computational efficiency while considering random fields.

The optimization algorithm includes both element-level and system-level reliabilities as
constraints, ensuring uniform risk across the structure for all the elements as well as safety
against overall failure. The use of a desired range of β values provides a clear idea about the
optimum solution. An absolute optimum is not required in this formulation; any feasible design
that has the β values within the desired range is regarded as optimum. This helps to make the
search algorithm fast and simple, with the use of variable discrete step sizes based on the distance
of the design at any iteration from the optimum design. The method also has the flexibility
to add or delete limit states as desired, making it attractive for practical application.

Acknowledgements

This paper is based upon work partly supported by the National Science Foundation
under Grants No. MSM-8352396, MSM-8544166, MSM-8746111, MSM-8842373, and MSM-
8896267. Financial support received from the American Institute of Steel Construction, Chicago
is appreciated. Any opinions, findings, and conclusions or recommendations expressed in this
publication are those of the authors and do not necessarily reflect the views of the sponsors.

Iteration  Z1 (in^3)  Z2 (in^3)  β1    β2    g3    g4    Plastic Collapse Prob.  Feasible
1          132.0      132.0      5.25  5.70  0.78  0.51  2.61 x 10^-8            Yes
2          92.4       132.0      3.91  6.11  0.71  0.43  5.26 x 10^-8            Yes
3          92.4       92.4       3.55  4.10  0.67  0.28  1.72 x 10^-4            No
4          92.4       112.2      3.72  5.17  0.69  0.37  2.54 x 10^-6            No
5          87.8       112.2      3.53  5.23  0.69  0.35  3.37 x 10^-6            Yes
6          87.8       95.4       3.38  4.32  0.67  0.28  1.09 x 10^-4            No
7          87.8       103.8      3.45  4.79  0.67  0.32  1.86 x 10^-5            No

Step sizes too small; no more improvement done.

Optimum Solution: Z1 = 87.8, Z2 = 112.2

Table 2. Steps in minimum weight design



References

1. Rackwitz, R. and Fiessler, B. Structural reliability under combined random load sequences,
Computers and Structures, Vol. 9, pp. 489-494, 1978.
2. Der Kiureghian, A. and Ke, J.B. Finite element-based reliability analysis of framed structures,
in Structural Safety and Reliability (Eds. Moan, T., Shinozuka, M. and Ang, A. H-S.),
pp. I-395 to I-404, Proceedings of the 4th Int. Conf. on Structural Safety and Reliability,
ICOSSAR'85, Kobe, Japan, 1985. Elsevier, Amsterdam, 1985.
3. Mahadevan, S. Stochastic Finite Element-Based Structural Reliability Analysis and Optimization.
Ph.D. Thesis, Georgia Institute of Technology, Atlanta, 1988.
4. Rosenblatt, M. Remarks on a Multivariate Transformation. Annals of Mathematical Statistics,
Vol. 23, No. 3, pp. 470-472, 1952.
5. Vanmarcke, E.H., Shinozuka, M., Nakagiri, S., Schueller, G.I. and Grigoriu, M. Random
Fields and Stochastic Finite Elements, Structural Safety, Vol. 3, pp. 143-166, 1986.
6. Moses, F. and Stevenson, J.D. Reliability-Based Structural Design, Journal of the Structural
Division, ASCE, Vol. 96, No. ST1, pp. 221-244, 1970.
7. Mau, S-T. and Sexsmith, R.G. Minimum Expected Cost Optimization, Journal of the Structural
Division, ASCE, Vol. 98, No. ST9, pp. 2043-2058, 1972.
8. Nakib, R. and Frangopol, D.M. Reliability-Based Analysis and Optimization of Ductile
Structural Systems, Structural Research Series 8501, Department of Civil, Environmental
and Architectural Engineering, University of Colorado, Boulder, 1985.
9. Frangopol, D.M. Multicriteria Reliability-Based Structural Optimization, Structural Safety,
Vol. 3, pp. 154-159, 1985.
10. Thoft-Christensen, P. and Murotsu, Y. Application of Structural Systems Reliability
Theory. Springer-Verlag, Berlin, 1986.
11. Mahadevan, S. and Haldar, A. Efficient Algorithm for Stochastic Structural Optimization,
Journal of Structural Engineering, ASCE, Vol. 115, No. 7, pp. 1579-1598, July 1989.
12. American Institute of Steel Construction. Manual of Steel Construction: Load and Resistance
Factor Design. Chicago, 1986.
13. Frangopol, D.M. Computer-Automated Sensitivity Analysis in Reliability-Based Plastic
Design, Computers and Structures, Vol. 22, No. 1, pp. 63-75, 1986.
14. Mahadevan, S. and Haldar, A. Stochastic Finite Element-Based Optimum Design of Large
Structures, in Computer-Aided Optimum Design of Structures: Applications (Eds. Brebbia,
C.A. and Hernandez, S.), Computational Mechanics Publications, Southampton, pp. 265-
274, 1989.
CLASSIFICATION AND ANALYSIS OF UNCERTAINTY
IN STRUCTURAL SYSTEMS

William Manners
Department of Engineering, University of Leicester
Leicester, LE1 7RH, U.K.

INTRODUCTION
Decisions about engineering design, construction, maintenance and
repair have to be taken in spite of a lack of complete knowledge about
all the factors that ought to influence such decisions. Often the only
information available which sheds any light on the magnitude of the un-
certainties present was created or collected for different purposes.
For instance, data on the strength of structures, or elements of
structures, is usually only available from tests made to determine or
validate design rules. In such experiments the aim is to cover as wide
a range of different examples as possible, with the result that there
is not the repetition of notionally identical tests that is the basis
of classical statistics. Nonetheless such data do contain information
relevant to the assessment of uncertainty.

The aim of this paper is to provide a structure for describing and
dissecting the uncertainty present in the variables that govern the
behaviour of a structural system. Even when a variable is to be des-
cribed by a single probabilistic model, it is still useful to know the
different components of the uncertainty which is to be described by the
model, and the model is likely to be more accurate when the nature of
the uncertainties is known and taken into consideration. More import-
antly, there are circumstances where the separation of uncertainties is
vital; for example to assess the correlation of variables in a complex
system, or to implement der Kiureghian's proposed method of determining
safety measures (1).

FORMULATION OF RELIABILITY CALCULATIONS

The traditional basic scheme for a reliability calculation
expresses the Margin of Safety as a function of a set of variables for
which probabilistic descriptions are available, and from this the
probability of failure is calculated. This can be thought of as one
example of a more general task, namely that of finding a probabilistic
description of a variable (z), given information about its dependence
on other variables (X) and probabilistic descriptions of those variables,
f_X(x). If the dependence can be expressed as a function g(.), then
the task can be expressed as: given z = g(X) and f_X(x), find F_Z(z),
and the solution can be expressed in the form:

F_Z(z) = ∫...∫_{g(x) < z} f_X(x) dx        [1]

provided that the function g(.) is such that increasing z always encloses
an increased volume of the X-space. If the dependence involves
a variety of possible sequences of events, as in the case of partial
failures of a structural system, then, of course, the problem cannot
be expressed as simply as equation [1], but the same concepts are
still involved.
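When the integral of equation [1] cannot be evaluated in closed form, a crude Monte Carlo estimate illustrates the definition directly. The example g and the distributions below are our own, chosen so the answer is known (Z = X1 + X2 with standard normals is N(0, 2), so F_Z(0) = 0.5):

```python
import random

def cdf_of_z(g, sample_x, z, n=100_000, seed=0):
    """Monte Carlo estimate of F_Z(z) = P(g(X) < z), equation [1]."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if g(sample_x(rng)) < z)
    return hits / n

g = lambda x: x[0] + x[1]
sample_x = lambda rng: (rng.gauss(0, 1), rng.gauss(0, 1))
p = cdf_of_z(g, sample_x, 0.0)
```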

The variables involved in these calculations can be divided into
three groups. First, some (probably most) of the variables in X
represent directly measurable physical quantities, such as dimensions
or material properties, and hence their probabilistic descriptions can
be derived without reference to the other variables and the function
g(). The word 'can' is important; there are arguments (e.g. those of
Lind and Chen (2)) that such models should be influenced by the
ultimate purpose of the whole set of calculations.

The remainder of the variables in X represent uncertainties in the
models used in formulating the function g(). If these are empirical
models, then it is usually most convenient to use the empirical con-
stants of the models as the variables. Where theoretical models are
used additional variables can be introduced to express the variations
between the predictions of the model and the observed behaviour of the
real structure.

The simple formulation is often extended to allow a third group
of variables to be explicitly included. These are the parameters (Θ)
of the probabilistic models for the X variables, and can themselves be
given a probabilistic model. In theory these models in turn have
parameters which could have probabilistic descriptions and so on ad
infinitum, but, assuming there is no value in extending the calculations
in this way, the task of a reliability calculation can be re-expressed
as: given Z = g(X), f_X|Θ(x|θ) and f_Θ(θ), find F_Z(z).

The foregoing section gives us a basic classification of uncertainty
based on its location in the formulation of the problem, that
is whether it is to be modelled by:
(1) the conditional distribution of a basic variable in X, or
(2) the distribution of a probabilistic model parameter in Θ; and
(1) can be subdivided into:
(1a) those basic variables that represent measurable physical parameters,
(1b) those basic variables that represent behavioural model parameters.

SOURCES OF UNCERTAINTY

The alternative method of classifying uncertainty is by its source.
Broadly, one could say that for the three locations:
(1a) the uncertainty in physical parameters arises from the variability
of nature (inherent variability);
(1b) the uncertainty in behavioural models (model uncertainty) arises
from lack of knowledge which makes it impossible to create exact
models for structural behaviour (ignorance) and, usually, the need
to use models which are simpler than the behaviour they represent
(error of simplification);
(2) the uncertainty in probabilistic models arises from lack of knowledge
(i.e. inadequate statistics) about the uncertainty that is
being modelled (estimation error).
This division (and the terminology) is the basis of the approach of der
Kiureghian (1), who then groups (1b) and (2) together as being due to
lack of knowledge, and therefore reducible by additional study, while
(1a) is inherent in nature and unavoidable. These distinctions are not
clear cut, however. In particular, it is likely that inherent variability
in the variables used in a model may be difficult to disentangle
from the variability of the model itself.

Other sources of uncertainty include the process of taking and
measuring samples. This will tend to increase the uncertainty that
appears to be present. Human errors, particularly 'gross' errors, are
the principal cause of structural failure.

The purpose of the remainder of this paper is to discuss in more
detail the nature of some of the uncertainties present in structural
systems, concentrating in particular on the concepts of inherent
variability and behavioural model uncertainty.
NATURAL VARIABILITY

Certain variables, such as wind speed, do directly measure the
variability of nature. There is a paradox here, however, in that wind
results from the rotation of the earth and its movement round the sun
(extremely predictable movements), operating through the strictly
deterministic Navier-Stokes equations applied to the earth's atmosphere.
Generations of weather forecasters have implicitly assumed that the un-
predictability of the weather resulted from the complexity of the system
and could be reduced indefinitely as the supply of data and speed of
computation was increased. Lorenz (3) however showed that unpredict-
ability is inherent in the non-linearity of the equations. His work,
and much work on other non-linear equations are now grouped together
under the subject heading "chaos" (4), (5); the essential characteristic
of a chaotic system is that its behaviour is extremely (even infinitely)
sensitive to the initial conditions, so that no measurement of the
present state of the system can be sufficiently accurate to predict its
behaviour indefinitely. The behaviour does, however, possess a detailed
structure that can be described topologically in phase-space. Most of
the burgeoning research effort in this area has necessarily gone into
determining these structures rather than describing statistically the
uncertainty that exists within them, but it has certainly been argued
that the behaviour of chaotic systems, as they develop, approaches that
of the abstract concept of 'randomness' (see e.g. (6)).

MANUFACTURING PROCESSES

In many calculations, most variables represent the outcome of
processes of manufacture, fabrication, construction, etc. Such processes
start with naturally variable raw materials, but these are then
cesses start with naturally variable raw materials, but these are then
processed, one of the subsidiary aims of the processing being to reduce,
or at least control, the variability. The rejection of material is, of
course, wasteful, and so the trend in manufacturing industry is strongly
towards the development of manufacturing systems which produce in-
significant levels of failures at final inspection. As well as the
increased use of conventional feedback systems, other developments
include the off-line quality control methods of Taguchi (7).

It is probably reasonable to characterise a broad range of manufacturing
processes by saying that the scatter in the values of some
measured property of the final product is an intrinsic feature of the
process and machinery used, whilst the mean value is set to keep
rejections to an acceptable level. For instance, the variation in
thickness of rolled steel plate is governed by the characteristics of
the rolling mill; older, less accurate mills have to roll to larger
average thicknesses than modern ones, for the same nominal thickness,
if the most important acceptance criterion is the achieving of a
specified minimum thickness.

This viewpoint gives an interesting perspective on the argument of
Lind and Chen (2). They argue for a consistency principle which states
that a calculated probability of failure should not increase if one of
the sample values in the resisting variable data is increased. This
can however happen if the sample is a high value and conventional, unbiased,
estimators of mean and standard deviation are used as the way
of taking information from the samples into the reliability calculation.
However, on the argument above, an excursion from the mean in either
direction is an indication of the quality of the manufacturing process,
which could equally well have produced an excursion in the opposite
direction, and hence an increase in the calculated probability of
failure is a valid rational response.
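The effect described can be demonstrated numerically: raising one already-high resistance sample increases the sample standard deviation by more than it increases the mean, so a normal-tail failure probability estimate goes up. The resistance data and load threshold below are invented for illustration:

```python
import math
import statistics

def failure_prob(samples, load):
    """P(R < load), assuming R normal with moments estimated from samples."""
    mu = statistics.mean(samples)
    sd = statistics.stdev(samples)           # conventional unbiased estimator
    u = (load - mu) / sd
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

base = [98.0, 100.0, 102.0, 100.0, 105.0]    # resistance data (invented)
raised = [98.0, 100.0, 102.0, 100.0, 115.0]  # highest sample value increased
p1 = failure_prob(base, 90.0)
p2 = failure_prob(raised, 90.0)
# p2 > p1: increasing a high sample value increased the computed failure probability
```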

It is clear that a product of a manufacturing and construction
process will owe its final variability to a combination of natural
variability and interventions by machines and humans; and the behaviour
of the machines will itself be the outcome of similar processes.

MODELS OF STRUCTURAL BEHAVIOUR

The nature of the uncertainties represented by behavioural model
parameters (type 1b in the discussion above) can be explored by considering
a variable z which represents, say, the strength of a
particular type of structural element. A calculation model g(.) is
available which gives values for z, approximately, in terms of measurable
properties of the element. These form a set of basic, type 1a
variables, X_1. To enable the model to represent the uncertainty in
the element strength, additional type 1b variables φ are used in the
model so that

z = g(X_1, φ)        [2]

As discussed above, the model uncertainty variables φ may be either
empirical constants in g(.), treated as probabilistic variables, or
additional variables introduced into a conventional formula. Commonly
only one type 1b variable is used in a given formula, and it is taken
as a factor on the value of z given by other means, i.e.

z = φ g(X_1)        [3]

In the design process, the structural designer will specify nominal
values for the variables X_1, written X_1^N. Frequently a fully detailed
design will contain far more dimensions and other requirements than are
given in X_1^N, and many of these will have some influence on the actual
value of z. Furthermore, the fabricator or contractor who builds a
structure will make decisions within the discretion allowed by his
contract which influence the actual strength z of the element in
question (the choice of welder is a typical example of this), and then
there are random events which affect z but which are not controlled by
anyone's decisions. We can therefore say that the true value of z, in
any particular case, is a function, which is of course unknown, of all
these effects:

z^T = g^T(x^T)        [4]

where superscript T indicates the true or actual value and:
x_1 represents the decisions made by the designer which are included
in the calculation model;
x_2 represents decisions made by the designer which influence z, but
which are not explicitly included in the calculation model;
x_3 represents decisions made by the contractor which influence the
value of z;
x_4 represents anything that can affect z, but which the design and
construction process does not attempt to control;
x is a vector containing all the above.

If the design and construction process is rational, these variables
should be of diminishing importance in the order given. In particular,
x_4 is included for the sake of completeness rather than its real
significance; most of the variability in z will usually come from the
other variables.

Data for the model uncertainties φ are obtained by constructing
examples of the element in question and testing them to find z^T. If a
single model uncertainty factor is used (as in equation [3]) then the
true value of φ for any particular test is given by

φ^T = z^T / g(x_1^T)        [5]

or

φ^T = z^T / g(x_1^N)        [6]

depending on whether the calculation of g(.) is performed using the
true measured values of x_1, or the nominal values of x_1 (written x_1^N)
which were used in designing the test. It is surprising how often
authors of papers proposing calculation models, and justifying them by
comparing the predictions of models with reality, do not say whether
the predictions of the model were made with measured or nominal values
of, say, yield stress.

Taking equation [6], and assuming that the differences between the true
and nominal values of the variables are small, gives

φ = g^T(x^T) / g(x_1^N)
  ≈ g^T(x^N) / g(x_1^N) + [1 / g(x_1^N)] Σ_i [∂g^T(x)/∂x_i] (x_i^T - x_i^N)        [7]

where the summation is over all the variables in x_1, x_2, x_3 and x_4,
and ∂g^T(x)/∂x_i is evaluated at the nominal values of all the variables.

In some cases the variables can be defined so that the relationship
is linear by definition, and the form above is not an approximation.
The ~4 variables do not, by definition, have a 'nominal' value, i.e.
one fixed by the designer or contractor; the nominal value above is
most conveniently thought of as being the mean value.

The representation of choices made by designers and contractors by
continuous variables can be demonstrated by considering the effect of
the choice of welder. A variable in ~3 would represent that part of
the variability of weld quality due directly to the skill and experience
of the welder. The nominal value indicates the expected quality to be
achieved from the chosen welder; the true value is the actual quality
achieved on the occasion the item in question was fabricated. Clearly
it will rarely be possible to actually formulate models and variables
to this degree of detail. The purpose of this analysis is to lay out
the effects systematically, rather than to produce an immediately
applicable methodology.

It is clear from equation [7] that for a set of nominally identical
cases the expectation and variance of ~ derive very simply from the
expectations and variances of the basic variables. One would expect

that the uncertainty in ~ would be reduced if it were defined by
equation [5] rather than [6], i.e. using true values of ~1 in the
calculation model, and this can be explored by writing -

[8]

where ai is given by evaluating, at ~N,

[9]

Hence for i in ~2, ~3, ~4, as before

[10]

but for i in ~1,

[11]

which is zero if

(1/gT(xN)) ∂gT(x)/∂xi = (1/g(~1N)) ∂g(~1)/∂xi    [12]

Hence if the available model g(.) represents the unknown, true behaviour
gT(.) so that this equation is satisfied, then the effect of the
variation between true and nominal values of ~1 variables will
disappear from the model uncertainty.

An alternative formulation for ~ is to take it as an additive
element rather than a multiplying factor, and a problem can be
transformed from the latter to the former by taking logs and renaming
the variables.
In this case equation [6] becomes -

zT = ~T + g(~1T) = gT(xT)    [13]

and hence

[14]

where ai = ∂gT(x)/∂xi − ∂g(~1)/∂xi evaluated at the nominal values of
x, and hence for ~1 variables ai = 0 if ∂gT(x)/∂xi = ∂g(~1)/∂xi, and
for the others ai = ∂gT(x)/∂xi.
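The log transformation that converts the multiplying factor into an additive term can be checked in one line (values illustrative):

```python
import math

# Sketch: a multiplicative model uncertainty becomes additive after
# taking logs (all values illustrative).
phi, g_x = 1.08, 150.0
z = phi * g_x                            # multiplicative: z = phi * g(x)
log_z = math.log(phi) + math.log(g_x)    # additive: log z = log phi + log g(x)
print(f"z = {z:.1f}, log z = {log_z:.4f}")
```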

In the typical circumstances outlined in the introduction there
is little or no data consisting of repetitions of the same nominal case.
In a mixed population of designs we can take the expansion about the
mean values m of the variables in the population giving, for the
additive model uncertainty -

~T = gT(xT) − g(~1T) ≈ gT(m) − g(m1) + Σi ai(xiT − mi)
   = gT(m) − g(m1) + Σi ai(xiT − xiN) + Σi ai(xiN − mi)    [15]

showing that, as long as the linear relationships are valid, the
variability in ~ can be attributed separately to the variability between
true and nominal values and the variability of nominal values within
the given population. In general, the greatest difficulty of such an
analysis is knowing whether there are any mixed populations in the
contractor-controlled ~3 variables. Normally it has to be assumed
that there are not; any that exist may then show themselves as anomalies
in the data and results. For the "ideal" case where the model is good
(i.e. ai = 0 for ~1) and where there are no mixed populations in ~3,
and remembering that xiN = mi for ~4 variables, then the only items
contributing to the second summation in equation [15] (i.e. the
variations between nominally identical cases) are the ~2 variables.

CONCLUSIONS
The uncertainties present in structural systems can be classified
and described either by their location in the formulation of a problem,
or by the source of the uncertainty.

Natural variability often arises from apparently deterministic
systems in nature; the theory of chaos is beginning to illuminate the
relationship between natural variability and the abstract concept of
probabilistic randomness.

The variability of manufactured products has extremely complex
origins, which means that apparently rational statements about it may
be incorrect in some instances.

A model has been demonstrated which attempts to reveal the role
played in the uncertainty observed in behavioural model parameters by
the various possible sources of uncertainty.

REFERENCES
1. Der Kiureghian, A., "Measures of Structural Safety under Imperfect
   States of Knowledge", J. Struct. Eng., ASCE, 1989, 1119-1140.

2. Lind, N.C. and Chen, X., "Consistent Distribution Parameter
   Estimation for Reliability Analysis", Structural Safety, 4,
   1989, 141-149.

3. Lorenz, E.N., "Deterministic Nonperiodic Flow", J. Atmospheric
   Sciences, 20, 1963, 130-141.

4. Gleick, J., "Chaos", Heinemann, 1988.

5. Thompson, J.M.T. and Stewart, H.B., "Nonlinear Dynamics and
   Chaos", John Wiley, 1986.

6. Ford, J., "How random is a coin toss?", Physics Today, 36,
   April 1983, 40-47.

7. Taguchi, G., et al., "Quality Engineering in Production Systems",
   McGraw-Hill, 1989.
DIRECTIONAL SIMULATION FOR
TIME-DEPENDENT RELIABILITY PROBLEMS

Robert E. Melchers
Department of Civil Engineering and Surveying
The University of Newcastle, N.S.W., 2308, Australia

ABSTRACT

Whilst there has been significant progress in the assessment of structural
reliability for complex structures (e.g., offshore oil platforms) in the
time-independent domain, significant problems remain in dealing with structures
subject to time-varying loading. The crux of the problem lies in the fact that
the loads are time dependent (e.g., wave loading, wind loading, etc.) but that
the material properties can usually be considered to be essentially time
independent. A procedure utilising refined Monte Carlo simulation in the
hyper-polar co-ordinate space of the load processes is described herein.

Conventional structural analysis deals with structures for which all loading is
assumed to be a function of one independent parameter, that is, the loading is
applied proportionally. In the context of structural reliability calculations, this
means that the analysis is "load path independent". Thus the limit states for the
structural system are assumed not to change as the loading is applied. This will be
assumed to be the case also in this paper.

Modelling of loading usually has been as a time-invariant random variable, for
example, as the pdf of maximum value during the life of the structure. This approach
has been termed the "time-integrated" approach (Melchers, 1987). However, when more
than one load acts on the structure, and the loads are not fully mutually dependent,
this approach fails to represent the problem realistically. The so-called
"load-combination" problem may be invoked for linear structural systems, but more
generally, stochastic process modelling is required to consider the time-dependent
nature of the load processes (Veneziano et al., 1977). Attempts to do this in a
262

First-Order Second Moment framework exist (e.g., Madsen and Zadeh, 1987; Wen and
Chen, 1987, 1989). The present paper, based on a research report (Melchers, 1989),
describes a means of solving the problem using the recent generalisation of
directional simulation (Ditlevsen et al., 1989; Melchers, 1990). As in these
works, the problem is formulated in the hyper-polar space.

In the present paper the space of the load processes is used; the load variable
space has been previously used (e.g., Schwarz, 1980; Lin and Corotis, 1985).

PROBLEM FORMULATION IN LOAD SPACE

Consider a structural system acted upon by n load processes Q(t) and having a
configuration described by deterministic parameters and m random variables X. The
probability of violation of one or more (known) limit states Gi(Q, X) = 0 (termed
"failure" in the sequel) in the period (0, tL), where tL is some design life, is given
by the m-dimensional integral

(1)

where fX( ) is the known joint probability density function of X. The term Pf(x)
represents the structural failure probability conditional on X = x. Unfortunately,
in all but elementary problems, Pf(x) is difficult to obtain; for such problems also
the integration in (1) is of high order and approximate techniques must be used (see,
e.g., Melchers, 1987).

Let the components of an n-vector R, termed the load capacity of the structure,
correspond in direction, but not magnitude, to the components of Q. The vector R
describes, for fixed X = x, the critical limit state(s) for the structure, such that
when Q > R the structure fails (see Figure 1). When Q = R, Q is on the limit state,
and therefore the i-th limit state equation Gi = 0 yields immediately a functional
relationship between R and X = x:

Gi(R, x) = 0    ∀ i    (2)


FIGURE 1: REALISATION OF ONE RAY AND ONE LIMIT STATE IN (HYPER-) POLAR CO-ORDINATES

Using (2), it is possible to rewrite (1) as

(3)

where Pf(r) is the probability that Q > R and fR( ) is the joint p.d.f. of the
structural resistance vector R. Clearly the dimension (n) of integration of (3) is
less than that of (1) - for which it is (m) - and in most realistic structural
reliability problems it is significantly less.

Consider now a hyper-polar co-ordinate system centred at some point C in the safe
domain. Apart from the suggestion that the point be chosen to lie in the safe domain
and to expose as much as possible of the (generally non-convex and piecewise) inner
envelope of limit state functions (for any given X = x), there are no obvious
criteria for selection. The point C = mQ is often convenient.

Let R - C = S · A, where S (≥ 0) is a random variable and A a unit vector in Rn,
such that s = 0 at C and

not necessarily independent of A. Hence for given A = a the vector R has its
components fixed in proportion, with S defining its magnitude. Accordingly, it is
now possible to define the probability density function fS( ) on the ray s. For a
given limit state function and given X = x, Li(R, x) = 0 becomes Li(s·a + c, x) = 0,
so that fS( ) is a function of the random variables X only.

It follows that (3) becomes

∫unit sphere fA(a) ∫s Pf(s|a) ds da    (4)

For a given structural life (0, tL), the conditional failure probability Pf(s|a) can
be obtained from the well-known bound based on the outcrossing rate ν+ of the vector
process Q(t) out of the safe domain D: ∩i Gi(Q, x) ≥ 0, ∀ i. Let individual
outcrossings be assumed independent Poisson events, as is reasonable for the
comparatively high-reliability problems of interest. Then the failure probability due
to the load Q outcrossing a fixed domain D at S = s along the ray A = a is bounded
from above by (e.g., Shinozuka, 1964; Veneziano et al., 1977)

(5)

in which Pf(0) denotes the failure probability at time t = 0. This quantity can be
readily obtained using well-known methods for time-invariant reliability and will not
be further considered in detail.

Expression (5) may be simplified in various ways (e.g., Melchers, 1987). The
simplest, for problems with rare outcrossings, is

Pf ≤ Pf(0) + ν+ tL    (6)
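A minimal numerical sketch of this bound, assuming the standard Poisson form Pf ≤ Pf(0) + (1 - Pf(0))(1 - exp(-ν+ tL)) and its rare-outcrossing linearisation; all values are illustrative, not taken from the paper:

```python
import math

# Sketch: failure probability bound from the mean outcrossing rate,
# assuming independent Poisson outcrossings (all values illustrative).
pf0 = 1.0e-4       # failure probability at t = 0
nu_plus = 2.0e-5   # mean outcrossing rate per unit time
t_L = 50.0         # design life

pf_poisson = pf0 + (1.0 - pf0) * (1.0 - math.exp(-nu_plus * t_L))  # (5)-type bound
pf_linear = pf0 + nu_plus * t_L                                    # (6): rare outcrossings

print(f"Poisson-form bound:   {pf_poisson:.6e}")
print(f"linearised bound (6): {pf_linear:.6e}")
```

Since 1 - exp(-x) ≤ x, the linearised form (6) always sits slightly above the Poisson form, so it remains a valid (if marginally looser) upper bound.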

To evaluate (6), the mean outcrossing rate ν+ for the domain D may be expressed as
the surface integral of the local outcrossing rate

ν+ = ∫ ν+(r, tL) d(ΔS)    (7)

where ν+(r, tL) is the local outcrossing rate through the elementary domain boundary
ΔS at point R = r, obtained from the well-known generalised Rice formula, or

(8)

where fQ( ) is the p.d.f. of the vector load process, and n( ) is the outward unit
vector perpendicular to the domain boundary at Q = r. Also Q̇(t) is the (continuous)
derivative of Q(t), assumed, as written, independent at any t of Q(t).

Expression (8) may be written in polar co-ordinates, and in conditional form
ν+(s|a, tL) for any S = s, A = a, as (with r = s·a + c):

(9)

In general, the unit normal vector n = {ni} will vary with S = s; it can be obtained
directly as

(10)

To obtain the mean outcrossing rate for use in (6), expression (9) must be modified
to allow for the orientation of the limit state surface at S = s,

where the term s^(n-1) da represents an elemental surface area perpendicular to a at s,
and (a·n( ))^(-1) converts s^(n-1) da to d(ΔS), representing the elemental surface area
equivalent to ΔS with normal n( ); see Figure 2.

FIGURE 2: ELEMENTAL (HYPER-) SURFACE SEGMENT ΔS WITH OUTWARD UNIT NORMAL n

Noting that for given S = s and A = a expression (6) becomes

(6a)

substitution of (10) into (6a) and then into (4) leaves

∫unit sphere fA(a) ∫s {Pf(s|a, 0) + tL E[n(s|a)·Q̇( )]+ · fQ( ) · s^(n-1)} ds da    (12)

Expression (12) may be rewritten in expectation forms suitable for importance
sampling (cf. Melchers, 1987; 1989; 1990)

(13)

and
267

(14)

where in (13) the samples are taken from hA(a), an appropriately chosen importance
sampling p.d.f., and in (14) correspondingly from hS|A(s|a) for the radial
direction s.
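The directional sampling idea behind these expectation forms can be illustrated with a minimal time-invariant sketch: a single linear limit state in a 2-D standard Gaussian load space, with a uniform (non-importance) directional density. All choices below are illustrative, not from the paper:

```python
import math
import random

# Sketch: directional simulation for the single linear limit state
# 3 - q1 - q2 = 0 in a 2-D standard Gaussian load space (time-invariant
# illustration; uniform sampling density over directions).
random.seed(0)

def radius_to_limit_state(a1, a2):
    """Distance from the origin to the limit state along direction (a1, a2)."""
    denom = a1 + a2
    return 3.0 / denom if denom > 1e-12 else math.inf

n, total = 20000, 0.0
for _ in range(n):
    theta = random.uniform(0.0, 2.0 * math.pi)
    r = radius_to_limit_state(math.cos(theta), math.sin(theta))
    # Conditional failure probability given the direction: P(S > r), with
    # S**2 chi-square distributed (2 d.o.f.), i.e. exp(-r**2 / 2).
    total += math.exp(-0.5 * r * r) if math.isfinite(r) else 0.0

pf_estimate = total / n
print(f"estimated Pf ~ {pf_estimate:.5f}")  # exact value is Phi(-3/sqrt(2)) ~ 0.017
```

Replacing the uniform directional density with an hA( ) concentrated towards the important directions, and weighting each sample by fA/hA, gives the importance-sampled estimator of (13).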

The above formulation requires the structural strength variation to be known along
any radial direction S = s for given A = a. For any individual limit state function
Li(Q, x) = 0 and with given A = a, fS|A will need to be obtained, in general, by
multiple integration. However, for realistic structures modelled by random variable
strengths, dimensions, etc., it is generally the case that the dimension m of X is
large. Accordingly, the central limit theorem might be invoked to argue that S is
approximately Gaussian, that is, that it can be represented by its first two moments.
This approximation would be expected to improve with increased dimension m,
provided a corresponding increase occurs in the number of Xi contributing to any one
limit state function. From (2), for any given resistance vector r|a, the random
variable S, given by R = S · A, is then completely defined as a function of X:

S = S(X)|a    (15)

It then follows that as a first approximation the first two moments of S are given by
standard formulae for Gaussian variables (e.g., Melchers, 1987). These require that
the limit state functions be explicit and differentiable. Once the moments are
known, the p.d.f. fS|A( ) can be immediately constructed. Some other approaches,
applicable for non-differentiable limit state functions, are considered in Melchers
(1989).

In general, there will be more than one limit state equation Gi( ) = 0, and their
distributions may overlap. This more general situation can be readily incorporated
in the above procedure; the principles are illustrated in Figure 3.

FIGURE 3: EFFECTIVE C.D.F. AND P.D.F. IN RADIAL DIRECTION WITH MULTIPLE LIMIT STATES

FIGURE 4: FAILURE MECHANISMS FOR RIGID-PLASTIC FRAME STRUCTURE

The frame structure shown in Figure 4 is subject to three stationary Gaussian load
processes Q(t) with mean vector mQ and covariance function matrix

C(s, t) = σ2 [ ρ(τ)      0.5 ρ(τ)   0
               0.5 ρ(τ)  ρ(τ)       0
               0         0          ρ(2τ) ]

where σ/m = 0.5, τ = s - t and ρ( ) is a correlation function (not specified in
detail herein). The frame is assumed to behave rigid-plastically with the following
collapse modes (limit state functions):

G1(t) = 4mX - Q1(t)

G2(t) = 4mX - Q2(t)

G3(t) = 6mX - Q3(t)

G4(t) = 8mX - Q1(t) - Q2(t)

where X is a random variable strength parameter, assumed here to be Gaussian
distributed N(μX, σX).
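The collapse conditions can be sketched numerically. The pairing of the loads Q2 and Q3 with modes 2 and 3 is reconstructed from the scan, and all numbers below are purely illustrative:

```python
# Sketch of the four rigid-plastic collapse conditions; the load
# assignments in modes 2-4 are reconstructed from the scan, and all
# numbers are illustrative only.
def limit_states(x, q1, q2, q3, m=1.0):
    """Return (G1, G2, G3, G4); a non-positive value signals collapse."""
    return (4.0 * m * x - q1,
            4.0 * m * x - q2,
            6.0 * m * x - q3,
            8.0 * m * x - q1 - q2)

g_vals = limit_states(x=1.0, q1=3.0, q2=5.0, q3=4.0)
active = [i + 1 for i, gi in enumerate(g_vals) if gi <= 0.0]
print("G values:", g_vals)                     # (1.0, -1.0, 2.0, 0.0)
print("modes at or beyond collapse:", active)  # [2, 4]
```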

It is easily shown that E[n · Q̇(t)]+ in equation (8) takes the standard Gaussian
form σ(n·Q̇)/√(2π) for time-invariant limit states. Also, for Gaussian processes the
variance of the derivative process follows from the correlation function; for
example, for the first limit state equation, nT = (1, 0, 0) and
Cov(Qi(s), Qj(t)) = ρ(s - t), so that Var(Q̇1) = -ρ″(0), with s - t = 0 as required,
i = j = 1 corresponding to the only non-zero term. It may be shown that the terms
[ ]+ for the other three limit state expressions have the same {-ρ″(0)}^(1/2) form,
differing only in constant factors.

In the above, and for C = mQ, analytical expressions were used for the components ni
of n. In the present example, with linear limit state functions, these expressions
are independent of X, the strength random variable. For more general situations,
some approximation may be required to obtain n.

Some typical results extracted from Melchers (1989) are presented in Figure 5. The
results for Pf(0) and ν+ were obtained from the expectation forms (13) and (14)
respectively. In the simulation, hA was taken as uniform over the unit sphere
centred at mQ, and hS|A was taken as a normal distribution with mean at the mean of
the critical limit state along s, given A = a. The integration over s was done
numerically. Other examples, including application to problems with pulse loading
having mixed distributions, are given in Melchers (1989).

Rather than centre the hyper-polar co-ordinate system on mQ, other possibilities have
been described for the time-invariant case by Ditlevsen et al. (1989). In certain
situations such possibilities can significantly improve sampling efficiency.

In the present formulation it is advantageous to scale the Q space such that the σQi
are not significantly different in magnitude; this is easily accomplished using a
linear transformation.

Finally it will be observed that (i) simulation is herein in the load process
space, and this is generally of much lower dimension than the space of Q + X
conventionally employed; (ii) the troublesome Rosenblatt transformation to standard


FIGURE 5: (a) OUTCROSSING RATE ν+ AND (b) INITIAL FAILURE PROBABILITY Pf(0),
AND THEIR COEFFICIENTS OF VARIATION (c, d)

Gaussian space is not required to be performed; (iii) simulation for Pf(0) and for
ν+ can be carried out concurrently, offering further numerical savings; and (iv)
the use of the (hyper-) polar co-ordinate system obviates many of the difficulties
occasioned by sampling in cartesian co-ordinates: in particular, points of maximum
likelihood, or so-called "design" or "checking" points, need not be identified.

An outline has been given for the formulation in the load process space of the
time-dependent structural reliability problem. This has as its main advantage the
considerably reduced directional Monte Carlo sampling required. An example problem
was given.

REFERENCES

DITLEVSEN, O., MELCHERS, R.E. and GLUVER, H., General Multi-Dimensional Probability
Integration by Directional Simulation, D.C.A.M.M. Report No. 392, Technical
University of Denmark, (to be published).

DITLEVSEN, O., BJERAGER, P., OLSEN, R. and HASOFER, A.M., (1988), Directional
Simulation in Gaussian Processes, Prob. Engrg. Mech., Vol. 3, No. 4, pp. 207-217.

LIN, T.S. and COROTIS, R.B., (1985), Reliability of Ductile Systems with Random
Strengths, J. Struct. Engrg., A.S.C.E., Vol. 111, No. 6, pp. 1306-1325.

MADSEN, H.O. and ZADEH, M., (1987), Reliability of Plates Under Combined Loading,
Proc. Marine Struct. Rel. Symp., S.N.A.M.E., Arlington, Virginia, pp. 185-191.

MELCHERS, R.E., (1987), Structural Reliability: Analysis and Prediction,
Ellis Horwood/J. Wiley.

MELCHERS, R.E., (1989), Load Space Formulation for Time Dependent Structural
Reliability, Research Report No. 044.11.1989, Department of Civil Engineering and
Surveying, The University of Newcastle.

MELCHERS, R.E., (1990), Radial Importance Sampling for Structural Reliability,
J. Engrg. Mech., A.S.C.E., January, Vol. 115, pp. 189-203.

SCHWARZ, R.F., (1980), Doctoral Thesis, Technical University, Munich.

VENEZIANO, D., GRIGORIU, M. and CORNELL, C.A., (1977), Vector Process Models for
Structural Reliability, J. Engg. Mech. Divn., A.S.C.E., Vol. 103, No. EM3,
pp. 441-446.

WEN, Y.K. and CHEN, H.C., (1987), On Fast Integration for Time-Invariant Structural
Reliability, Prob. Engrg. Mech., Vol. 2, No. 3, pp. 156-162.

WEN, Y.K. and CHEN, H.C., (1989), System Reliability Under Time Varying Loads, I,
J. Engrg. Mech., A.S.C.E., Vol. 115, No. 4, pp. 808-823.
SOME STUDIES ON
AUTOMATIC GENERATION OF STRUCTURAL FAILURE MODES

Yoshisada Murotsu*, Shaowen Shao* & Ser Tong Quek**


*Department of Aeronautical Engineering
University of Osaka Prefecture, Sakai, Osaka 591, Japan
**Department of Civil Engineering, National University of Singapore
10 Kent Ridge Crescent, Singapore

ABSTRACT

This paper is concerned with the following two subjects: (1) The depth-first and
the width-first branching rules are compared for selecting probabilistically
significant failure paths in a redundant structure. (2) Some discrepancies between
elasto-plastic and rigid-plastic structural models are discussed for generating
failure mode equations.

1. INTRODUCTION

For the calculation of structural systems reliability, failure modes are defined
as collapse mechanisms which may form in a structure, and the structural failure
probability is given by the probability of the union of all the failure modes.
Since there are too many failure modes for large-scale structures with high
redundancy, some methods have been proposed to select the probabilistically
dominant failure paths, which finally give the dominant failure modes. An efficient
method for the selection is a branch-and-bound method, where an important problem
is how to make branching and bounding operations so as to effectively find dominant
failure paths and to discard the others. In this paper, two branching rules, i.e.,
the depth-first and width-first branching rules, are proposed and compared through
a numerical example.

The generation of the failure mode equations should be based on a sound mechanics
model. Elasto-plastic behavior is usually assumed, and in some cases failure mode
equations generated in this way are dependent on the failure paths. This is not
consistent with rigid-plastic analysis. An attempt is also made in this paper to
explain such a discrepancy.

2. SELECTION OF DOMINANT FAILURE PATHS

Consider a structural system with n elements. Elements are assumed to fail one by
one up to some specific number pq until structural failure results. The sequence of
those elements yielding structural failure is symbolically denoted as r1, r2, ...,
rp, ..., rpq, which is called a complete failure path. The set of the failed
elements yielding structural failure is called a failure mode. On the other hand,
the sequence of failed elements which does not yield structural failure, e.g., the
failure path r1 -> r2 -> ... -> rp (p < pq), is called a partial failure path. The
probability P(p)fp(q) of failure path r1 -> r2 -> ... -> rp is calculated as

P(p)fp(q) = P[∩i=1..p F(i)ri(q)]    (1)

where F(i)ri(q) is the failure event that element ri fails at the i-th order of the
sequence, i.e., F(i)ri(q) = (Z(i)ri(q) ≤ 0). Superscript p denotes the length of the
failure path and q is used to denote a particular failure path. When p < pq,
P(p)fp(q) is the probability of a partial failure path, while it is the probability
of a complete failure path for p = pq.

The failure probability P(p)fp(q) is estimated by evaluating its lower and upper
bounds, P(p)fp(q)(L) and P(p)fp(q)(U). For example, these bounds are given by the
following formulas [1]:

P(p)fp(q)(L) ≤ P(p)fp(q) = P[∩i=1..p F(i)ri(q)] ≤ P(p)fp(q)(U)    (2)

P(p)fp(q)(U) = min j∈{2,...,p} P[F(1)r1(q) ∩ F(j)rj(q)]    (3)

P(p)fp(q)(U) = min j∈{1,2,...,p} P[F(j)rj(q)]    (4)

(5)

(6)

Eq. (5) needs the safety margins at all the failure stages, while Eq. (6) uses only
the safety margins at the first and last stages and the upper bound of the preceding
failure path probabilities.
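For illustration, the two upper bounds (3) and (4) can be evaluated from assumed marginal and pairwise failure-event probabilities (the numbers are hypothetical, not from the paper's example):

```python
# Sketch: upper bounds (3) and (4) evaluated from assumed marginal and
# pairwise failure-event probabilities (numbers hypothetical).
p_single = {1: 0.010, 2: 0.008, 3: 0.006}  # P[F(j)]
p_joint = {(1, 2): 0.004, (1, 3): 0.003}   # P[F(1) and F(j)]

p = 3
ub4 = min(p_single[j] for j in range(1, p + 1))      # Eq. (4)
ub3 = min(p_joint[(1, j)] for j in range(2, p + 1))  # Eq. (3), sharper
print(f"upper bound, Eq. (4): {ub4}")
print(f"upper bound, Eq. (3): {ub3}")
```

For any consistent probabilities, P[F(1) ∩ F(j)] ≤ P[F(j)], so the pairwise bound (3) can only tighten the simple bound (4).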
There are too many failure paths in a redundant structure to generate all of them,
which necessitates a procedure for selecting only the probabilistically significant
failure paths. Efficient methods using a branch-and-bound technique have been
proposed [2,1], and the algorithmic procedure for the original version is as
follows:

PfPM : the maximum of the lower bounds for the probabilities of the
       selected complete failure paths
X    : the set of the failure paths to be selected for branching
Xc   : the set of the selected complete failure paths
Xt   : the set of the discarded failure paths
Xs   : a selected failure path
Xc   : a selected complete failure path
γ    : a bounding constant
φ    : a null set
X0   : the artificial starting point of the failure path, which can
       proceed to any one of the potential failure elements

Step 1 (initializing)
Set PfPM = 0, Xc = φ, Xt = φ, and X = X0.
Step 2 (partitioning)
1. Proceed one failure stage by adding each of all the potential
   failure elements to the specified partial failure path. The
   resulting failure paths are added to the set X of the failure
   paths to be selected for branching.
2. Evaluate the upper bounds P(p)fp(q)(U) for the probabilities of
   the new failure paths.
the new failure paths.
Step 3 (branching)
1. Select the failure path Xs with the maximum upper-bound
   probability among the newly partitioned failure paths.
2. Check the attainment of structural failure.
3. If structural failure is attained, go to step 4 for bounding,
   adding the selected complete failure path Xc = Xs to the set of
   the selected complete failure paths. If not, go to step 2
   for further partitioning by specifying the selected failure
   path as the failure path to be partitioned.
Step 4 (bounding)
1. Evaluate the lower bound P(pq)fp(q)(L) of the probability of
   occurrence for the selected complete failure path.
2. Update the maximum PfPM of the lower-bound probabilities of
   the selected complete failure paths by setting
   PfPM = P(pq)fp(q)(L) when PfPM < P(pq)fp(q)(L).
3. Discard the failure paths which have probabilities of failure
   smaller than 10^-7 · PfPM. Add the discarded failure paths to
   the set Xt of the discarded failure paths. Exclude the discarded
   failure paths and the selected complete failure path from the
   set X of the failure paths for branching. Consequently, the set
   X of the failure paths to be selected for branching is updated
   accordingly.
Step 5 (terminating)
If X=¢ , i.e., there are no failure paths left for branching,
the search is terminated. If not, go to step 3-2 by selecting
the failure path Xs with the maximum upper-bound failure prob-
ability from the set of the failure paths with the largest
path-length in the set X.

The selection of the branching path in step 3-1 and step 5 is restricted to the
set of the newly partitioned failure paths and that of the failure paths with the
largest path-length, respectively. Thus, it is called a depth-first branching
rule. A variation in the selection rule is to extend the set of the failure paths
for selection

(a) Depth-first branching rule. (b) Width-first branching rule.

Fig. 1 Branching rules

to the set of all the potential failure paths for branching, i.e., the
set X. This is called a width-first branching rule. Fig. 1 illustrates
the two branching rules.
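The two selection rules can be contrasted on a toy search tree. The upper-bound values below are made up for illustration; in the real algorithm they would come from Eqs. (3)-(4):

```python
# Toy contrast of the two branching rules. Each partial path carries a
# made-up upper-bound probability; children append one unused element.
UPPER_BOUND = {
    (): 1.0,
    (1,): 0.6, (2,): 0.5,
    (1, 2): 0.4, (2, 1): 0.45,
    (1, 2, 3): 0.3, (2, 1, 3): 0.35,  # complete paths (length 3)
}
COMPLETE_LEN = 3

def children(path):
    return [p for p in UPPER_BOUND
            if len(p) == len(path) + 1 and p[:len(path)] == path]

def search(width_first):
    """Return the order in which paths are selected for branching."""
    frontier, visited = [()], []
    while frontier:
        if width_first:
            # width-first: best upper bound anywhere on the frontier
            best = max(frontier, key=UPPER_BOUND.get)
        else:
            # depth-first: restrict the choice to the longest paths
            depth = max(len(p) for p in frontier)
            best = max((p for p in frontier if len(p) == depth),
                       key=UPPER_BOUND.get)
        frontier.remove(best)
        visited.append(best)
        if len(best) < COMPLETE_LEN:
            frontier.extend(children(best))
    return visited

print("depth-first order:", search(width_first=False))
print("width-first order:", search(width_first=True))
```

With these numbers the width-first rule reaches the complete path with the larger upper bound, (2, 1, 3), before any other complete path, mirroring the behaviour observed in the numerical example.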
In order to calculate the upper bounds of the probability of occurrence for the
failure paths, either Eq. (3) or Eq. (4) is applied, while the lower bound is
evaluated with either Eq. (5) or Eq. (6).
Next, a numerical example is given to find dominant failure paths
by using the depth-first and width-first branching rules, respec-
tively. Consider a simple frame as shown in Fig. 2. It has eight
potential failure elements. Fig. 3 illustrates all the structural
failure modes formed with those failure elements. The numerical data
are listed in Table 1.
Following the branch-and-bound procedures described above, ele-
ments are selected as failed elements one by one until a dominant com-
plete failure path with maximum lower-bound probability of occurrence
is found. Fig. 4 shows the search tree using the depth-first branching
rule. There are 27 branching stages in all. The complete failure
paths are attained at the branching stages 5, 6, 9, 10, 11, 20, 21,
and 24. Their lower and upper bounds on probabilities of occurrence

Fig. 2. A simple frame structure (5 m spans)

Fig. 3. Structural failure modes in the simple frame structure:
(3,5,6), (2,5,7), (3,5,7), (2,5,6), (1,3,6,8), (1,2,7,8), (1,3,7,8),
(1,2,6,8), (1,5,6,8), (1,5,7,8).

Table 1. Numerical data for the simple frame structure.

Element number                          1,2,7,8        3,4,5,6
Cross-sectional area Ai (m^2)           3.60x10^-3     4.40x10^-3
Moment of inertia Ii (m^4)              2.58x10^-5     3.70x10^-5
Mean of reference strength Ri (kNm)     76.1           99.8
Young's modulus E (GPa)                 210
Mean of yield stress σYi (MPa)          276
Mean of load (kN)                       L1 = 24.8, L2 = 40.0
Coefficient of variation                CV_Ri = 0.05, CV_Lj = 0.3
Coefficient of correlation              ρ_RiRj = 0.0, ρ_LiLj = 0.0
Bounding constant γ                     0.0

Table 2. Selected dominant complete failure paths for the simple frame
structure.

No.  Failure path     Lower bound    Upper bound    Depth-first   Width-first
                                                    branching     branching
1    7->8->5->2       0.2423x10^-2   0.6278x10^-2   YES           NO
2    7->8->5->1       0.5702x10^-2   0.6153x10^-2   YES           NO
3    7->5->8->2       0.6061x10^-2   0.6278x10^-2   YES           NO
4    7->5->8->1       0.6034x10^-2   0.6153x10^-2   YES           NO
5    7->5->2          0.6278x10^-2   0.6278x10^-2   YES           NO
6    5->7->8->2       0.6183x10^-2   0.6459x10^-2   YES           YES-1
7    5->7->2          0.6458x10^-2   0.6459x10^-2   YES           YES-2
8    5->8->7->2       0.3858x10^-2   0.6459x10^-2   YES           YES-3

Selected failure modes: depth-first: (2,5,7), (1,5,7,8); width-first: (2,5,7)

(legend: plain number = potential failure element; circled number =
selected failure element; i = branching stage)

Fig. 4. Search tree of the depth-first branching rule



Fig. 5. Search tree of the width-first branching rule (legend as in Fig. 4)



are shown in Table 2. It is seen that the failure path with the maximum
lower-bound probability is No. 7. Fig. 5 shows the search tree using
the width-first branching rule. The search necessitates 22 branching
stages, fewer than the 27 of the depth-first branching. The complete
failure paths are formed at stages 11, 14, and 19. The three paths are
the same as the last three obtained from the depth-first branching, as
shown in Table 2.
Comparing the two branching rules, it is observed that the width-first
branching rule finds the dominant complete failure path with the
largest probability of occurrence earlier than the depth-first
branching rule. As the first complete failure path selected by the
width-first branching rule has quite a large probability of occurrence,
many partial failure paths are discarded. In Table 2, the structural
failure modes selected in the depth-first branching are (1,5,7,8) and
(2,5,7), while in the width-first branching only (2,5,7) is selected.

3. STRUCTURAL MECHANICS

(a) Elasto-plastic. (b) Rigid-plastic.

Fig. 6 Assumed member R-δ behaviour.

An essential step in the computation of the probability of failure of
structural systems is the generation of the failure mode equations.
For large structures with a high degree of redundancy, the incremental
load method is often employed [1,3-8]. It is inherent in this approach
that for ductile members an elasto-plastic force-displacement
relationship is assumed rather than a rigid-plastic relationship, as
shown in Fig. 6. The need to assume some relative

stiffness values is essential to obtain the intermediate steps leading


to the mechanisms although the final failure mode equation is indepen-
dent of these values. This is consistent with the theory of plas-
ticity. In most instances, however, some failure mode equations gen-
erated are not consistent with rigid-plastic analysis in the sense
that some of the terms imply negative work. The treatment of such
equations in the probability computations are often ambiguous and an
attempt here is made to explain and reconcile such a discrepancy.
Consider for example the simple truss shown in Fig. 7, with all the
failure paths and equations, including the necessary constraints,
depicted in Fig. 8. The failure mode equation for the path 2⁺ to 1⁻,
namely R1⁻ - 0.20R2⁺ - 3.03S = 0, is strictly incorrect from the
theory of plastic analysis. From the elasto-plastic point of view, one
can consider load increments as follows: using the force-displacement
relationship of Fig. 6(a) with capacities such that member 2 yields
first, member 2 will in fact continue to elongate whereas member 1
will be further shortened. There is no obvious violation of the rules
of structural mechanics, and such a failure mode may indeed be valid.
In rigid-plastic theory, for the mechanism involving members 1 and 2,
member 3 will have no elongation since it is not loaded to its yield
capacity although it is in tension. Hence, members 1 and 2 must fail
in compression. In other words, member 2 should not extend in the
first place, and the equation obtained is invalid. The discrepancy
can also be viewed in terms of stresses, due to the fact that the form
of the internal stress distribution in elasto-plastic structures is
different from that in rigid-plastic structures [9]. This implies
that if the assumption of rigid-plastic behavior is desired and the
equations are generated by the incremental load method, only the
consistent equations can be admitted in the probability computations.
The intermediate branch equations, including the constraints, should
not be used, since the rigid-plastic solution is strictly
path-independent. This is true if all the complete mechanisms are
obtained in computing the system failure probability. For the example
shown, the relevant set of equations needed to evaluate the system
failure probability is

R1⁻ + 0.20R2⁻ - 3.03S = 0                        (7a)

R1⁻ + 0.25R3⁺ - 3.78S = 0                        (7b)

R2⁺ + 1.24R3⁺ - 3.78S = 0                        (7c)
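Equations (7a)-(7c) are the safety margins of the three complete mechanisms, and the system fails when any of them becomes non-positive (a series system). A crude Monte Carlo sketch follows; the normal distributions assigned to the member capacities and the load are purely illustrative and are not data from the example:

```python
import random

random.seed(1)

def system_failure_probability(n=50_000):
    """Monte Carlo estimate of the series-system failure probability
    built from the mechanism margins (7a)-(7c). All distribution
    parameters below are hypothetical."""
    failures = 0
    for _ in range(n):
        r1 = random.gauss(10.0, 1.0)   # member 1 capacity (compression)
        r2 = random.gauss(8.0, 0.8)    # member 2 capacity
        r3 = random.gauss(8.0, 0.8)    # member 3 capacity (tension)
        s = random.gauss(2.0, 0.4)     # applied load S
        g = (r1 + 0.20 * r2 - 3.03 * s,    # mechanism (7a)
             r1 + 0.25 * r3 - 3.78 * s,    # mechanism (7b)
             r2 + 1.24 * r3 - 3.78 * s)    # mechanism (7c)
        if min(g) <= 0.0:
            failures += 1
    return failures / n
```

The estimate is dominated by the weakest mechanism, which illustrates why only the complete-mechanism equations, and not the intermediate branch equations, need to enter the computation.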

[Figure omitted: a three-member truss loaded by a horizontal force
S (> 0); member dimensions are given in the original figure.]

Fig. 7 A simple truss structure

Member 1 fails first (compression):
  R1⁻ - 3.25S < 0,  (R1⁻ < 2.92R2) ∩ (R1⁻ < 1.51R3)
    then 2⁻:  R1⁻ + 0.20R2⁻ - 3.03S < 0,  (R1⁻ + R2⁻ - 0.99R3 < 0)
    then 3⁺:  R1⁻ + 0.25R3⁺ - 3.78S < 0,  (R1⁻ + R2⁻ - 0.99R3 > 0)

Member 2 fails first (tension):
  R2⁺ - 1.11S < 0,  (R2⁺ < 0.34R1) ∩ (R2⁺ < 0.52R3)
    then 1⁻:  R1⁻ - 0.20R2⁺ - 3.03S < 0,  (R1⁻ - R2⁺ - 0.99R3 < 0)
    then 3⁺:  R2⁺ + 1.24R3⁺ - 3.78S < 0,  (R1⁻ - R2⁺ - 0.99R3 > 0)

Member 3 fails first (tension):
  R3⁺ - 2.16S < 0,  (R3⁺ < 0.66R1) ∩ (R3⁺ < 1.94R2)
    then 1⁻:  R1⁻ + 0.25R3⁺ - 3.78S < 0,  (R1 - R2 - 0.99R3 < 0)
    then 2⁺:  R2⁺ + 1.24R3⁺ - 3.78S < 0,  (R1 - R2 - 0.99R3 > 0)

Legend

  Nˢ: N = failed member number; s = + for tensile, - for compressive
  failure. Expressions in parentheses are inequality constraints
  ensuring that failure occurs in the sequence indicated.

Fig. 8 Failure paths and equations for the simple truss structure

REFERENCES

1. Thoft-Christensen, P. and Murotsu, Y., Application of Structural
   Systems Reliability Theory, Springer-Verlag (1986).
2. Murotsu, Y., Okada, H., Yonezawa, M., Grimmelt, M. and Taguchi, K.,
   'Automatic Generation of Stochastically Dominant Modes of Struc-
   tural Failure in Frame Structure,' Bulletin of University of Osaka
   Prefecture, Series A, Vol. 30 (1981), pp. 85-101.
3. Moses, F. and Rashedi, M. R., 'The Application of System
   Reliability to Structural Safety,' Proc. 4th Int. Conf. on Appl. of
   Statistics and Probability in Soil and Structural Mechanics, eds.
   Augusti, G. et al., Pitagora Editrice, Bologna (1983), pp. 573-584.
4. Murotsu, Y., Okada, H., Niwa, K. and Miwa, S., 'Reliability
   Analysis of Truss Structures by Using Matrix Method,' Transactions
   of the ASME, Journal of Mechanical Design, Vol. 102 (1980),
   pp. 749-756.
5. Murotsu, Y., Okada, H., Yonezawa, M. and Taguchi, K., 'Reliability
   Assessment of Redundant Structure,' Structural Safety and
   Reliability, eds. Moan, T. and Shinozuka, M., Elsevier, Amsterdam
   (1981), pp. 315-329.
6. Murotsu, Y., Okada, H. and Matsuzaki, S., 'Reliability Analysis of
   Frame Structure under Combined Load Effects,' Structural Safety and
   Reliability, eds. Konishi, I. et al., Vol. I, IASSAR (1985),
   pp. 117-128.
7. Quek, S-T. and Ang, A. H-S., 'Structural Reliability by the Method
   of Stable Configuration,' Structural Research Series, No. 529
   (1986), University of Illinois, Urbana.
8. Watwood, V. B., 'Mechanism Generation for Limit Analysis of
   Frames,' J. Struct. Mech., Proc. of the ASCE, Vol. 109, ST-1 (1979),
   pp. 1-15.
9. Melchers, R. E., Structural Reliability: Analysis and Prediction,
   Ellis Horwood Limited (1987).
SENSITIVITY ANALYSIS FOR COMPOSITE STEEL GIRDER BRIDGES

Andrzej S. Nowak* & Sami W. Tabsh**


*Department of Civil Engineering
University of Michigan, Ann Arbor, MI 48109, USA
**Gannett Fleming, Inc., Harrisburg, PA 17105, USA

Abstract

A reliability analysis procedure is formulated for girder bridges. Reliability indices
are calculated for composite steel girders. System reliability is also calculated and
compared to girder reliability. The effect of correlation between the resistances of
individual girders is investigated. Sensitivity functions are developed relating girder
and system reliabilities to material properties, dimensions and load components.
The effect of a partial reduction of girder reliability on the system reliability is evalu-
ated.

Introduction

There is a growing need for efficient bridge evaluation procedures. In Michigan,
steel girder bridges constitute over 60 percent of the total population. Many of them
show various signs of deterioration (corrosion, fatigue, mechanical damage). Deter-
ministic methods do not reveal the actual load carrying capacity and safety reserve.
Furthermore, load components and resistance parameters are random variables.
Therefore, safety is a convenient measure of the structural performance. The objec-
tive of this paper is to present an approach to the calculation of reliability for girder
bridges. The calculations are carried out for composite steel girders.

The statistical models for load and resistance are based on the available traffic
surveys, test data and analysis. Reliability is calculated for individual girders and
for bridges treated as structural systems. The main elements of the system are gird-
ers and a composite concrete slab.

Deteriorated girders reduce the reliability. The importance of various load and
resistance parameters can be evaluated by sensitivity analysis. Sensitivity functions
are developed for girders and bridges.

Bridge Load Models

The major load components considered in this study are dead load, D, live load,
L, and dynamic load, I. Statistical parameters for bridge loads were recently evalu-
ated in conjunction with the development of the Ontario Highway Bridge Design Code
(OHBDC) (Nowak et al. 1990) and LRFD (Load and Resistance Factor Design) for
AASHTO (Nowak et al. 1989).

Dead load is the weight of structural and nonstructural members. It was


observed that D is a normal random variable. The parameters of D depend on the type of
load. For factory-made members, with good control of dimensions, the bias factor
(mean-to-nominal ratio) is 1.03 and the coefficient of variation, V, is 0.08. For cast-
in-place concrete (slab) the bias factor is 1.05 and V is 0.10. The thickness of asphalt
shows a considerable degree of variation. Based on recent Ontario data
(Ministry of Transportation Ontario, yet unpublished), the mean thickness is 90 mm
with V = 0.15. The total dead load is considered as a sum of components.

Live load is the effect of trucks moving on the bridge. The major parameters
considered include truck weights and axle configurations, traffic volume,
multiple presence (more than one truck simultaneously on the bridge), truck trans-
verse position and the reference time period. The basis for derivation of the live load
model is truck survey data collected by the Ministry of Transportation Ontario (Agar-
wal and Wolkowicz 1976). The analysis was performed by Nowak and Hong (1990).

The survey truck data was used to develop distribution functions of moments for
various bridge spans. The results are plotted on normal probability paper in Fig. 1,
for spans of 3 to 30 m. The horizontal scale represents the calculated moment divided
by the design moment specified in the OHBDC (1983). Also shown are extrapolated
upper tails of the distributions. The surveyed trucks represented approximately two
weeks of heavy traffic. The probability levels corresponding to a 75 year lifetime, and
to other, shorter periods, are shown in Fig. 1.
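The vertical scale of normal probability paper is the standard normal variable, so the level corresponding to the maximum of N independent events is z = Φ⁻¹(1 − 1/N). The truck counts below are assumptions chosen only to illustrate the extrapolation from a two-week survey to a 75 year lifetime:

```python
from statistics import NormalDist

def probability_level(n_events):
    """Standard-normal ordinate of the expected maximum of n_events
    independent samples: z = Phi^-1(1 - 1/n)."""
    return NormalDist().inv_cdf(1.0 - 1.0 / n_events)

# Hypothetical counts: ~10,000 heavy trucks in the 2-week survey, and
# 75 years = 75 * 26 two-week periods of comparable traffic.
trucks_survey = 10_000
trucks_75yr = trucks_survey * 75 * 26
print(probability_level(trucks_survey))   # about 3.7
print(probability_level(trucks_75yr))     # about 5.3
```

The jump from roughly z = 3.7 to z = 5.3 is exactly the gap that the extrapolated upper tails in Fig. 1 must bridge.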

The maximum 75 year live load was derived by extrapolation of the truck data and
simulations to model the multiple presence. The mean maximum moment was deter-
mined for single lane and for two lane bridges. For a single lane and spans up to
about 30-40 m, the load is governed by a single truck. For longer spans two trucks
produce the maximum moment. The results are shown in Fig. 2 for various time
periods. For two lanes, it was observed that two side-by-side trucks with fully corre-
lated loads govern. Each of the two trucks turned out to be the maximum monthly
truck. The maximum monthly moment is about 15 percent lower than the maximum
75 year moment.

[Plot omitted: inverse standard normal ordinate vs. moment ratio
(0.0 to 1.4) for spans of 3, 6, 9, 12, 18 and 30 m, with probability
levels for various time periods marked.]

Fig. 1 Cumulative Distribution Functions of Moments Due to Surveyed
Trucks.

[Plot omitted: mean maximum moment ratio vs. span (0 to 60 m) for
various time periods.]

Fig. 2 Mean Maximum Moments for Various Time Periods.

[Plot omitted: midspan deflection (in; 1 in = 25 mm) vs. time (sec),
showing the static and dynamic components.]

Fig. 3 Static and Dynamic Deflections.

Moving trucks often produce a considerable dynamic effect on bridges. An
example of the actual deflection as a function of time is shown in Fig. 3. It is conve-
nient to consider the dynamic load as an equivalent static load. In this study, I is
defined as a fraction of L, equal to the ratio of the dynamic and static deflections (see
Fig. 3). Dynamic behavior depends on three major factors: surface roughness, bridge
dynamics (period of vibration) and vehicle dynamics (suspension system). It is very
difficult to determine the actual contribution of these factors. The data base for the
statistical parameters of I is very limited. The present model is based on the Ontario
bridge tests (Billing 1984) and analytical simulations by Hwang and Nowak (1990a
and 1990b).

The test results indicated that the mean I (as a fraction of the mean L) is
0.09-0.17. However, most of the larger values of I correspond to light trucks, and very
little data is available for heavy trucks. Therefore, a numerical procedure was devel-
oped for simulation of the dynamic interaction between bridge, truck and surface. It
was observed that the dynamic load decreases for increased truck weights.

The maximum value of live load is caused by two trucks side-by-side. The dynamic
load resulting from two trucks is smaller than for one truck by about 20-25 percent.
In further calculations, the dynamic load applied to the mean maximum 75 year
moment is 0.13 mL for a single truck and 0.09 mL for two trucks, where mL = mean
live load. The coefficient of variation of I is 0.8.
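The way these components combine into the total load effect Q = D + L + I can be sketched with second-moment algebra, assuming the components are independent; the numerical inputs in the usage example are placeholders, not the calibrated values:

```python
from math import sqrt

def total_load_stats(m_D, V_D, m_L, V_L, i_frac=0.09, V_I=0.8):
    """Mean and coefficient of variation of Q = D + L + I, where the
    dynamic load I is taken as i_frac * mean(L) (two trucks side by
    side). Components are assumed statistically independent."""
    m_I = i_frac * m_L
    m_Q = m_D + m_L + m_I
    s_Q = sqrt((V_D * m_D) ** 2 + (V_L * m_L) ** 2 + (V_I * m_I) ** 2)
    return m_Q, s_Q / m_Q
```

For example, total_load_stats(1000.0, 0.10, 1500.0, 0.18) gives a mean of 2635 and a coefficient of variation of about 0.117; despite V_I = 0.8, the dynamic load contributes little because its mean is small.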

Bridge Resistance Models

Resistance is considered as a function of three factors: material properties
(strength), fabrication factor (dimensions) and professional factor (analysis). The
statistical parameters (bias factor and coefficient of variation) for bridge girders are
derived from test data and special simulations (Tantawi 1986; Tabsh 1989; Tabsh
and Nowak 1990). The bias factor and coefficient of variation of the system resistance
are calculated using special sampling formulas (Zhou and Nowak 1988).

Girder resistance is modeled using a special simulation procedure. The unit
consisting of a girder and the effective width of slab is idealized as shown in Fig. 4.
Material properties are modeled using the available statistical data. Crushing of
concrete is the dominant failure mode. The moment-curvature relationship is derived
by an extensive Monte Carlo simulation. Typical results are presented in Fig. 5. For
each section size, three curves are shown: the middle one is the mean and the others
are one standard deviation above and below the mean. The mean-to-nominal ulti-

[Sketch omitted: actual vs. idealized composite cross section.]

Fig. 4 Idealization of the Girder Section.

[Plots omitted: bending moment (kip-ft) vs. curvature (rad/in) for
sections W24x76, W33x130, W36x210 and W36x300; each panel shows the
mean curve and the curves one standard deviation above and below the
mean. 1 in = 25 mm; 1 kip-ft = 1.356 kN-m.]

Fig. 5 Moment vs. Curvature for Considered Girders.



mate moment is 1.05-1.06, and the coefficient of variation is V = 0.10-0.105. For the
yield moment the mean-to-nominal ratio is 1.01-1.03, with V = 0.11.

Bridge resistance is defined in terms of the ultimate truck weight. The numeri-
cal value depends on the truck configuration (axle spacing, axle load ratio), truck
position (longitudinal and transverse) and multiple presence (number of trucks on
the bridge). The most common trucks on American highways are single unit vehicles
(3 axle trucks) and semi-trailers (5 axle trucks), as shown in Fig. 6. Various
combinations of these two vehicles are considered to determine the critical load. For
each combination, axle loads are increased gradually until the deformations exceed
acceptable limits. The deformations are measured in terms of the maximum girder
deflection. For an 18 m span bridge, the resulting load-deflection curves are plotted
in Fig. 7 for a single truck and a semi-trailer. The first plastic hinge is formed in
one of the girders at about 65 percent of the ultimate load.

Reliability Analysis

Reliability indices are calculated for girders and bridge systems using the Rackwitz
and Fiessler procedure (1978). The limit state function is

g = R - Q                                        (1)

where R = resistance and Q = total load effect. Q is treated as a normal variable, and
R as lognormal.

The reliability index, β, is defined as a function of the probability of failure, PF,

β = -Φ⁻¹(PF)                                     (2)

where Φ⁻¹ = inverse standard normal distribution function.
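For the linear limit state g = R − Q with Q normal and R lognormal, the Rackwitz-Fiessler procedure replaces R by an equivalent normal variable at the design point (matching CDF and PDF there) and iterates. The sketch below is a textbook-style illustration, not the authors' code, and the inputs in the usage example are hypothetical:

```python
from math import log, sqrt

def beta_rackwitz_fiessler(mR, VR, mQ, sQ, iters=30):
    """Reliability index for g = R - Q with lognormal R and normal Q,
    using the Rackwitz-Fiessler normal-tail approximation at the
    design point."""
    zeta = sqrt(log(1.0 + VR * VR))      # lognormal shape parameter
    lam = log(mR) - 0.5 * zeta * zeta    # lognormal scale parameter
    r = mR                               # initial design-point guess
    beta = 0.0
    for _ in range(iters):
        s_eq = zeta * r                  # equivalent normal: match pdf
        m_eq = r * (1.0 - log(r) + lam)  # equivalent normal: match cdf
        denom = sqrt(s_eq ** 2 + sQ ** 2)
        beta = (m_eq - mQ) / denom
        # project back onto the failure surface R = Q
        r = m_eq - beta * s_eq ** 2 / denom
    return beta
```

When the load uncertainty vanishes the iteration converges to the design point r = mQ, where β reduces to the exact lognormal result (λ − ln mQ)/ζ, which is a convenient check.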

Two major North American bridge design codes are considered, the AASHTO Specifi-
cations (1989) and the Ontario Highway Bridge Design Code (OHBDC 1983). The
AASHTO design equation is

1.3 D + 2.17 (L + I) < φ R                       (3)

where φ = resistance factor (1.0 for steel). The OHBDC (1983) load and resistance
factors are

1.1 D1 + 1.2 D2 + 1.5 D3 + 1.4 (L + I) < φ R     (4)

[Sketch omitted: axle load fractions are 20%, 40%, 40% for the single
unit truck and 11.1%, 22.2%, 22.2%, 22.2%, 22.2% for the semi-trailer
truck. 1 in = 25 mm.]

Fig. 6 Truck Configurations Used in the Analysis.

[Plot omitted: truck load (kips; 1 kip = 0.448 kN) vs. maximum bridge
deflection (ft; 1 ft = 0.305 m) for a single truck and a semi-trailer.]

Fig. 7 Truck Load vs. Maximum Bridge Deflection.

[Plot omitted: reliability index vs. span (0 to 70 m) for girders
designed using OHBDC (1983) and AASHTO (1989).]

Fig. 8 Reliability Indices for Composite Steel Girders Designed
Using AASHTO (1989) and OHBDC (1983).

where D1 = weight of factory-made members (steel); D2 = weight of cast-in-place
members (concrete slab); D3 = weight of asphalt. The resistance factor is 0.90 for
steel. The design live load and dynamic load are different in AASHTO (1989) and
OHBDC (1983). The calculated reliability indices for bridge girders designed using
the two codes are shown in Fig. 8.

System reliability indices are calculated for a typical bridge with five girders,
spaced at 2.4 m. Spans from 12 to 30 m are considered. System reliability is
strongly related to the girder reliability level. The relationship between the girder
and system reliability indices is shown in Fig. 9. The effect of correlation between
the resistances of girders is also considered. Two extreme cases are full correlation
and no correlation. For an 18 m span bridge, the results are plotted in Fig. 10.

Sensitivity Analysis

Sensitivity functions relate the reliability index to the realization of selected par-
ameters, such as material properties, dimensions or load components. The parameters
considered include the steel yield stress, Fy, the plastic section modulus, Z (compact
sections are considered), the slab concrete strength, fc', the slab thickness, ts, the
slab effective width, b, dead load, D, live load, L, and dynamic load, I. In the
sensitivity analysis for the bridge, the reliability of a girder, or a group of girders,
is also considered as a parameter.

The results of the analysis performed for composite girders are shown in Figs. 11
and 12, for 12 and 30 m spans, respectively. The most important parameters are Fy
and Z, while the parameters related to the concrete slab have a limited effect on
reliability.
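A sensitivity function of this kind can be sketched with a simple second-moment model: hold the load statistics fixed and recompute β as the mean resistance is scaled up by a given percentage (standing in for an increase in Fy or Z). All numbers below are hypothetical, chosen only so that the unmodified index lands near a typical girder value of 3.8:

```python
from math import log, sqrt

def beta_lognormal(mR, VR, mQ, VQ):
    """Second-moment reliability index for lognormal R and Q; a common
    closed-form approximation, used here only for illustration."""
    return log(mR / mQ) / sqrt(VR ** 2 + VQ ** 2)

def sensitivity_curve(percent_changes, mR=2600.0, VR=0.11,
                      mQ=1400.0, VQ=0.12):
    """beta as a function of a percent increase in the mean resistance
    (e.g. through Fy or Z); the base statistics are placeholders."""
    return [beta_lognormal(mR * (1.0 + p / 100.0), VR, mQ, VQ)
            for p in percent_changes]
```

Because β depends on ln(mR/mQ), each 10 percent increase in mean resistance raises the index by a constant increment, here about 0.59, which is why sensitivity curves for strength parameters are nearly straight lines.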

Sensitivity analysis for the system is demonstrated on an 18 m span bridge with
five W36x135 girders, spaced at 2.4 m, as shown in Fig. 13. The girder reliability
index is 3.80. Two extreme degrees of correlation between the girders are considered:
full correlation and no correlation. The corresponding sensitivity functions for mate-
rial, dimension and load parameters are shown in Figs. 14 and 15 for the uncorrelated
and correlated cases, respectively. As in the girder analysis, the most important
parameters are those related to structural steel, Fy and Z. Relatively unimportant
are the parameters related to the concrete slab. Live load is more important than D
and I.

The effect of deterioration (reduction) of the girder reliability on the system
reliability is shown in Figs. 16 and 17. In Fig. 16, reduced reliability indices are con-
sidered for individual girders. In Fig. 17, a simultaneous reduction of the reliability
index in two or three girders is considered.
[Plot omitted: system reliability index vs. girder reliability index
for spans of 40, 60, 80 and 100 ft; 1 ft = 0.305 m.]

Fig. 9 System Reliability vs. Girder Reliability for Various Spans.

[Plot omitted: system reliability index vs. girder reliability index
for ρ = 0 and ρ = 1, where ρ = coefficient of correlation.]

Fig. 10 System Reliability vs. Girder Reliability for Full
Correlation and No Correlation Between Girders.

[Plot omitted: ratio of modified to original reliability index vs.
percent change from nominal (0 to 50) for Fy, Z, fc', b, ts, D, L
and I.]

Fig. 11 Sensitivity Functions for the Girder, Span 12 m.

[Plot omitted: ratio of modified to original reliability index vs.
percent change from nominal (0 to 50).]

Fig. 12 Sensitivity Functions for the Girder, Span 30 m.



[Sketch omitted: 34 ft wide deck with right and left traffic lanes on
five girders, spaced 4 @ 8 ft = 32 ft with 1 ft overhangs;
1 ft = 0.305 m.]

Fig. 13 Cross Section of the Considered Bridge.

[Plot omitted: system reliability index vs. percent change from
nominal (0 to 50).]

Fig. 14 Sensitivity Functions for the System, No Correlation
Between the Girders.

[Plot omitted: system reliability index vs. percent change from
nominal.]

Fig. 15 Sensitivity Functions for the System, Full Correlation
Between the Girders.

[Plot omitted: system reliability index vs. percent decrease in the
element reliability index (0 to 80), with curves for individual
girders (girder 1 or 5 marked).]

Fig. 16 Reduced Girder Reliability (1 Girder) vs. System
Reliability.

[Plot omitted: system reliability index vs. percent decrease in the
element reliability index (0 to 80), with curves for girders 2, 3 & 4,
girders 1 & 2, and girders 1 & 5.]

Fig. 17 Reduced Girder Reliability (2 or 3 Girders) vs. System
Reliability.

The results indicate that the exterior girders are more important from the system
reliability point of view. System reliability is not very sensitive to the reliability of
an individual girder. A 70 percent reduction in girder reliability causes only a 15
percent drop in the system reliability. However, if two or more adjacent girders are
subjected to a loss of reliability, then the system reliability is considerably reduced.
Conclusions

A reliability analysis procedure is formulated for girder bridges. Statistical models
are summarized for the load and resistance parameters. Reliability indices are
calculated for girders and bridge systems. Sensitivity functions are developed for
load and resistance parameters in composite steel girder bridges.

System reliability is higher than girder reliability, in particular in the case of no
correlation between girder resistances. The difference between girder reliability and
system reliability can be considered as a measure of bridge redundancy.

Sensitivity functions point to the importance of parameters related to steel
(strength and dimensions). Parameters related to the concrete slab are less
important. Partial failure of an individual girder causes only a limited decrease of
the system reliability level. However, deterioration of two or three adjacent girders
may be critical.

Acknowledgements

The presented research was partially supported by the National Science Founda-
tion under Grant No. MSME-8715496, with Program Director Kenneth Chong; this
support is gratefully acknowledged.

References

AASHTO, 1989, "Standard Specification for Highway Bridges", American Association
of State Highway and Transportation Officials (AASHTO), Washington, DC.

Agarwal, A. C. and Wolkowicz, M., 1976, "Interim Report on 1975 Commercial Vehicle
Survey", Research and Development Division, Ministry of Transportation and
Communications, Downsview, Ontario.

Billing, J. R., 1984, "Dynamic Loading and Testing of Bridges in Ontario," Canadian
Journal of Civil Engineering, Vol. 11, No. 4, December, pp. 833-843.

Hwang, E-S. and Nowak, A. S., 1990a, "Dynamic Analysis of Girder Bridges", Trans-
portation Research Record No. 1223, Washington, DC, pp. 85-92.

Hwang, E-S. and Nowak, A. S., 1990b, "Simulation of Dynamic Load for Bridges",
ASCE, Journal of Structural Engineering, submitted.

Nowak, A. S., Hong, Y-K. and Hwang, E-S., 1990, "Calculation of Load and Resistance
Factors for OHBDC 1990", Report UMCE 90-06, Department of Civil Engineer-
ing, University of Michigan, Ann Arbor, MI.

Nowak, A. S. and Hong, Y-K., 1990, "Bridge Live Load Models," ASCE, Journal of
Structural Engineering, submitted.

Nowak, A. S., Hong, Y-K., Tabsh, S. W., Hwang, E-S., Abi-Nassif, H. and Ting, S-C.,
1989, "Calibration Task Group," Report UMCE 89-17, Department of Civil Engi-
neering, University of Michigan, Ann Arbor, MI.

OHBDC, 1983, "Ontario Highway Bridge Design Code", Ministry of Transportation
Ontario, Downsview, Ontario, Canada.

Rackwitz, R. and Fiessler, B., 1978, "Structural Reliability Under Combined Random
Load Sequences", Computers and Structures, No. 9, pp. 484-494.

Tabsh, S. W. and Nowak, A. S., 1990, "Reliability of Highway Bridges," ASCE, Journal
of Structural Engineering, tentatively accepted.

Tabsh, S. W., 1989, "Reliability-Based Sensitivity Analysis of Girder Bridges", Ph.D.
Thesis, Department of Civil Engineering, University of Michigan, Ann Arbor, MI.

Tantawi, H. M., 1986, "Ultimate Strength of Highway Girder Bridges", Ph.D. Thesis,
Department of Civil Engineering, The University of Michigan, Ann Arbor, MI.

Zhou, J-H. and Nowak, A. S., 1988, "Integration Formulas to Evaluate Functions of
Random Variables", Structural Safety, No. 5, pp. 267-284.
LONG-TERM RELIABILITY OF A JACKUP-PLATFORM FOUNDATION

Knut O. Ronold
Det norske Veritas, P. O. Box 300, N-1322 Høvik, Norway

ABSTRACT
A probabilistic model for analysis of the foundation stability of a jack-up platform is
presented. A jack-up platform with three independent legs supported by individual spudcan footings
is considered as an example, and a two-dimensional representation of this platform is adopted. Con-
ventional bearing capacity failure as well as horizontal sliding of one of the spudcan footings are
considered. Uncertainties in wave and wind loading as well as in soil strength properties are
included. The long-term reliability of the foundation is assessed by means of a nested application of
a first-order reliability method.

I. INTRODUCTION
Jack-up platforms are used offshore, mainly for short-term commissions in connection with
exploration and drilling for oil. Recently, also long-term applications of this mobile platform type
have become attractive as an alternative to use of fixed and more permanent platform types, for
example for oil production purposes on marginal fields.
A jack-up platform consists of a hull supported by 3 or 4 legs which are jacked down to touch
the seafloor when the platform is installed on an offshore site. Each platform leg is usually equipped
with a footing which forms a foundation for transfer of the platform forces to the foundation soils.
The footings may be so-called spudcans with a conical shape. During installation the spudcans will
penetrate the soil under the selfweight of the platform in combination with some temporarily applied
vertical preload. The final penetration depth of the spudcans and the corresponding final contact
area between the spudcan and the soil will be governed by equilibrium between the applied vertical
forces and the bearing capacity of the foundation soils.
During a storm the platform is subjected to wave forces and possibly current forces on the legs,
and wind forces on the hull. In the following, emphasis is given to the wave and wind loading.
Foundation failure under one or more platform footings is one of the most frequent causes for
jack-up platform accidents with consequences ranging from limited structural damage to capsize
and total platform loss. The environmental properties as well as the soil properties governing such a
failure are encumbered with uncertainties. A probabilistic approach to the problem is therefore
adopted, based on available analysis methods. The approach is described in this report, and its
practical application is demonstrated by presentation of an example case.
Emphasis is laid on presenting an illustrative example for application of reliability methods to
analysis of a jack-up platform foundation. Some simplifications are therefore made, first of all by
limiting the number of variables which are modeled as stochastic variables. Only the variables
which are expected to be most important, namely those pertaining to the wave and wind loading
and those pertaining to the soil, are modeled as stochastic variables. An example with all uncertain
variables modeled as stochastic variables would have become very comprehensive, and the computa-
tions would have become very time-consuming.

2. EXAMPLE PLATFORM
A typical 3-legged jack-up platform is considered. The platform is presented in Figure 1 with
the major geometrical data marked out. The example platform is founded on sand in 70 meters of
water in the North Sea.

3. ENVIRONMENTAL CONDITIONS
A storm is assumed to consist of a stationary sea state of duration T_storm = 6 hours with
significant wave height Hs and corresponding mean zero-upcrossing period Tz. The distribution of
the significant wave height Hs in a sea state of 6 hours duration is assumed to be a Weibull

[Sketch omitted: two-dimensional projection of the platform showing
the front leg and the two aft legs, the direction of barge loading, a
16 m dimension and the 70 m water depth.]

Figure 1 Projection of example platform in vertical plane of loading

distribution, see Bitner-Gregersen and Haver (1989). According to this assumption the cumulative
distribution function for Hs is

F_Hs(h) = 1 - exp(-((h - γ)/α)^β)                                    (1)

where α = 1.498, β = 1.146 and γ = 0.679 for the considered North Sea location.
The significant wave height Hs in an arbitrary 6 hour sea state can then be expressed in
terms of a standardized normally distributed variable U1,

Hs = γ + α(-ln(1 - Φ(U1)))^(1/β)                                     (2)

where Φ denotes the standardized normal distribution function.
The significant wave height Hs,max in the most severe 6 hour sea state during a period of
operation D = 1 year can similarly be expressed in terms of a standardized normally distributed vari-
able U1',

Hs,max = γ + α(-ln(1 - Φ(U1')^(1/N)))^(1/β)                          (3)

where N = D/T_storm denotes the total number of sea states during the operation of the platform at
the considered location. This is based on the assumption that the significant wave heights in the N
sea states are mutually independent.
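Equations (1)-(3) translate directly into code; the sketch below reproduces the transformation from a standardized normal variable to Hs, and to the Hs of the worst of N sea states via F_max = F^N (standard library only):

```python
from math import log
from statistics import NormalDist

ALPHA, BETA_W, GAMMA = 1.498, 1.146, 0.679   # eq (1) parameters
PHI = NormalDist()

def hs_from_u(u):
    """Eq (2): Hs in an arbitrary 6 hour sea state from a
    standardized normal variable u."""
    return GAMMA + ALPHA * (-log(1.0 - PHI.cdf(u))) ** (1.0 / BETA_W)

def hs_max_from_u(u, n_states):
    """Eq (3): Hs in the worst of n_states independent sea states,
    using F_max = F**N, so PHI.cdf(u) is the quantile of the maximum."""
    q = PHI.cdf(u) ** (1.0 / n_states)
    return GAMMA + ALPHA * (-log(1.0 - q)) ** (1.0 / BETA_W)
```

For one year of 6-hour states (N = 365·4 = 1460) the median of the annual maximum, hs_max_from_u(0.0, 1460), comes out near 9.5 m, against a median of about 1.8 m for an arbitrary sea state.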
The mean zero-upcrossing period Tz is conditional on the significant wave height Hs. Accord-
ing to Bitner-Gregersen and Haver (1989), the distribution of Tz conditional on Hs can be taken as
a lognormal distribution. The mean zero-upcrossing period can then be expressed in terms of a
standardized normally distributed variable U2 as follows,

Tz = exp(U2·σ + μ)                                                   (4)

where

μ = 0.933 + 0.578·Hs^0.395                                           (5)

and

σ = 0.055 + 0.336·exp(-0.585·Hs)                                     (6)

for the location in question. Hs is in meters and Tz is in seconds.

Long-crested waves from one direction are considered, and the Pierson-Moskowitz spectral den-
sity is assumed for the wave energy

S_η(ω) = (Hs²/(4π))·(2π/Tz)⁴·ω⁻⁵·exp(-(1/π)·(2π/Tz)⁴·ω⁻⁴),  ω > 0    (7)

where ω is the angular frequency.


The variance of the sea elevation due to wave action and the average period of the waves are
expressed in terms of the zero'th and second order moments of the spectral density.
The sea elevation process is assumed to be Gaussian and narrow-banded. By crossing theory
for random processes it can be shown that the cumulative distribution function of the maximum sea
elevation in the storm can be estimated as

F_ηmax(η) = exp(-ν_η(η)·T_storm)                                     (8)

where η_max is the maximum sea elevation, and T_storm is the duration of the storm which is charac-
terized by the significant wave height Hs, see Madsen et al. (1986). ν_η is the average upcrossing
rate for the sea elevation and is expressed as

ν_η(η) = (1/(2π))·(λ₂/λ₀)^(1/2)·exp(-η²/(2λ₀))                       (9)

for a Gaussian process. Hence, the cumulative distribution function for the maximum sea elevation
becomes

F_ηmax(η) = exp(-(T_storm/(2π))·(λ₂/λ₀)^(1/2)·exp(-η²/(2λ₀)))        (10)

Under the Gaussian assumption, the maximum wave height Hmax equals two times the maximum
sea elevation and can now be expressed as

Hmax = 2·(-2λ₀·ln(-(2π/T_storm)·(λ₀/λ₂)^(1/2)·ln Φ(U3)))^(1/2)       (11)

where U3 is a standardized normally distributed variable which expresses the inherent uncertainty
of the maximum wave height.
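Equations (7)-(11) can be checked numerically: integrate the spectrum for the moments λ₀ and λ₂ and evaluate eq. (11) at the median (Φ(U3) = 0.5). The sketch below is an illustration, not the report's computation; for the spectrum of eq. (7), λ₀ should come out as Hs²/16 and 2π√(λ₀/λ₂) should recover Tz:

```python
from math import pi, exp, log, sqrt

def pm_spectrum(w, hs, tz):
    """Pierson-Moskowitz spectral density of eq (7)."""
    wp4 = (2.0 * pi / tz) ** 4
    return hs * hs / (4.0 * pi) * wp4 * w ** -5 * exp(-wp4 / pi * w ** -4)

def spectral_moments(hs, tz, n=20000, w_max=6.0):
    """lambda_0 and lambda_2 by simple numerical integration."""
    dw = w_max / n
    l0 = l2 = 0.0
    for i in range(1, n + 1):
        w = i * dw
        s = pm_spectrum(w, hs, tz)
        l0 += s * dw
        l2 += w * w * s * dw
    return l0, l2

def h_max_median(hs, tz, t_storm=6 * 3600.0):
    """Median maximum wave height, eq (11) with Phi(U3) = 0.5."""
    l0, l2 = spectral_moments(hs, tz)
    return 2.0 * sqrt(-2.0 * l0 *
                      log(-(2.0 * pi / t_storm) * sqrt(l0 / l2) * log(0.5)))
```

For Hs = 10 m and Tz = 11 s this gives λ₀ ≈ 100/16 = 6.25 m² and a median Hmax of roughly twice Hs over the 6 hour storm, a familiar rule-of-thumb ratio for narrow-banded seas.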
The corresponding wave period T is assumed to follow a Longuet-Higgins distribution, see
Longuet-Higgins (1983), and can be expressed in terms of a standardized normally distributed vari-
able U4 as follows

(12)

where

ν² = λ₀λ₂/λ₁² - 1                                                    (13)

is a spectral bandwidth measure and where

C = Hmax/(2·(2λ₀)^(1/2))                                             (14)

is a normalized maximum wave height. Reference is made to Holm et al. (1988).


Sinusoidal waves are assumed. Once the maximum wave height Hmax and the corresponding
wave period T are known, the wave forces on the platform legs are calculated according to
Morison's equation,

F = ∫₀^(d+η) [ (1/2)·ρ·C_D·D_c·u·|u| + ρ·C_I·(π·D_c²/4)·u̇ ] dz       (15)

where F is the total horizontal force on a cylindrical vertical leg with diameter D, d is the water
depth, η is the sea elevation above mean sea water level, u is the water particle velocity due to the
wave, and u̇ is the corresponding water particle acceleration. C_D is a drag coefficient, and C_I is an
inertia coefficient. ρ is the mass density of sea water.
It is noticed that F depends on the position of the wave relative to the platform legs. It is also
noticed that F is a static force, i.e. no dynamic amplification is included. This may be a somewhat
unconservative approximation, especially for a platform in 70 m of water as in the present case.
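A static Morison force of the kind in Eq. (15) can be sketched with linear (Airy) wave kinematics for one wave position. The equivalent leg diameter and the coefficients C_D = 1.0 and C_I = 2.0 below are hypothetical placeholders, not values from the study, and the dispersion relation is solved by simple fixed-point iteration, which is adequate for the intermediate water depth considered here.

```python
import math

G = 9.81  # acceleration of gravity [m/s^2]

def wave_number(t_period, d):
    # Linear dispersion relation w^2 = g*k*tanh(k*d), fixed-point iteration
    w = 2.0 * math.pi / t_period
    k = w ** 2 / G  # deep-water starting value
    for _ in range(200):
        k = w ** 2 / (G * math.tanh(k * d))
    return k

def morison_force(h_wave, t_period, d, diam, cd, ci, rho=1025.0, phase=0.0, n=400):
    # Eq. (15): drag + inertia terms integrated from the sea floor (-d) to the
    # instantaneous surface elevation, with Airy kinematics at one wave phase
    w = 2.0 * math.pi / t_period
    k = wave_number(t_period, d)
    a = h_wave / 2.0
    eta = a * math.cos(phase)  # instantaneous surface elevation
    total, dz = 0.0, (eta + d) / n
    for i in range(n):
        z = -d + (i + 0.5) * dz  # elevation relative to mean water level
        ch = math.cosh(k * (z + d)) / math.sinh(k * d)
        u = a * w * ch * math.cos(phase)         # horizontal particle velocity
        ud = -a * w ** 2 * ch * math.sin(phase)  # horizontal particle acceleration
        total += (0.5 * rho * cd * diam * u * abs(u)
                  + rho * ci * math.pi * diam ** 2 / 4.0 * ud) * dz
    return total  # total horizontal force [N]
```

As the text notes, the force depends on the wave position: sweeping `phase` over a full cycle and taking the extreme value would give the governing static force for one leg.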
Current forces on the platform legs are taken into account according to Morison's equation
based on a current velocity
v = v₁ (z/d)^(1/7) + v₂ (z/d)    (16)

where z is the distance above the sea floor, d is the total water depth, v₁ is the tidal current velo-
city at the still water level, and v₂ is the wind-generated current velocity at the still water level.
Deterministic values v₁ = 0.85 m/s and v₂ = 0.73 m/s are used in this analysis. Other current
models are available in Bitner-Gregersen and Haver (1989).
Wind forces on the hull are calculated according to a formula

(17)

where A_hull is the area exposed to wind and u is the averaged sustained wind speed acting on the
hull and calculated as

u = u₁₀ (0.93 + 0.007 z)^(1/2)    (18)

where u₁₀ is the wind speed at 10 m height and z is the average height of the hull above sea level.
The wind speed u₁₀ at 10 m height is correlated with the significant wave height H_S. The dis-
tribution of the wind speed at 10 m height conditioned on the significant wave height H_S can be
represented by a Weibull distribution,

F_(u₁₀|H_S)(u₁₀ | H_S = h_S) = 1 − exp(−(u₁₀/s)^r)    (19)

where r and s are functions of h_S, see Bitner-Gregersen and Haver (1989).
Based on this, u₁₀ can be expressed in terms of a standardized normally distributed variable
U₅ as follows

u₁₀ = s (−ln(1 − Φ(U₅)))^(1/r)    (20)

where

r = 2.424 + 0.233 H_S^1.12    (21)
s = 4.40 + 1.94 H_S    (22)
for the present location. The exposed wind area is A_hull = 400 m^2, and its average height above sea
level is taken as z = 28 m. Wind gust is not included.
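The transformation of Eqs. (19)-(22) from a standardized normal variable U₅ to the conditional 10 m wind speed can be written out directly; the following is a short sketch (illustrative, not the authors' code) using the coefficient fits of Eqs. (21)-(22).

```python
import math

def std_normal_cdf(u):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def u10_given_hs(hs, u5):
    # Eqs. (19)-(22): conditional Weibull 10 m wind speed generated from a
    # standardized normally distributed variable U5
    r = 2.424 + 0.233 * hs ** 1.12  # Weibull shape, Eq. (21)
    s = 4.40 + 1.94 * hs            # Weibull scale [m/s], Eq. (22)
    return s * (-math.log(1.0 - std_normal_cdf(u5))) ** (1.0 / r)
```

With h_S = 13.62 m the median outcome (U₅ = 0) is a wind speed of about 29 m/s, consistent with the design point value in Table 1.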
Wind and current forces are assumed to act collinearly with the wave forces, and all forces in
the considered example are assumed to act in a vertical plane from the platform aft towards the
platform front. This representation of the loading is adopted to suit a two-dimensional frame model
for the jack-up structure, see below. This implies that the directionality of the loading is not
accounted for in this study. This is believed to be conservative.

4. SOIL-STRUCTURE INTERACTION AND FOUNDATION STABILITY


The platform forces are transferred through the spudcan footings to the supporting soils, in this
case a sand. A situation is considered where the loading is directed from the platform aft so that
the two aft footings will experience a vertical unloading whereas the single front footing will experi-
ence a vertical load increase. With respect to the foundation stability for the platform, two failure
modes are considered to be critical, namely a conventional bearing capacity failure under the single
front footing, and a horizontal sliding of one or both of the two aft footings.
307

The horizontal capacity of a spudcan footing may be found according to friction considerations
and will be proportional to the vertical force.
The vertical capacity of a spudcan footing will be dependent on the horizontal force and may be
calculated according to Brinch Hansen's formula for bearing capacity
Q_V = (½ γ B N_γ s_γ d_γ i_γ + p₀' N_q s_q d_q i_q) A    (23)

where B is the diameter of the contact area A between the spudcan and the soil, and γ is the sub-
merged unit weight of the soil. γ = 10 kN/m^3 is used in this analysis. s, d and i denote shape factors,
embedment depth factors and load inclination factors, respectively, see Det norske Veritas (1977).
p₀' is the effective overburden pressure. The contact area A results from the vertical loading applied
during installation. This loading consists of the submerged selfweight of the platform, in this case
W = 109 MN, and a temporary preload, W_p = 120 MN.
N_q and N_γ are bearing capacity factors which depend on the angle of friction φ for the sand:

N_q = exp(π tan φ) tan^2(π/4 + φ/2)    (24)
N_γ = 1.5 (N_q − 1) tan φ    (25)

The angle of friction φ depends on the relative density D_r of the sand, and a model

φ = a₁ + a₂ D_r + ε    (26)

is chosen. A regression analysis of 14 observations of pairs (D_r, φ) from drained triaxial tests on
sand specimens in the laboratory yields the following statistical properties for a₁, a₂ and ε,

E[a₁, a₂]^T = [19.43, 25.76]^T,  D[a₁, a₂]^T = [1.593, 2.088]^T,  ρ_a = −0.9742,  σ_ε = 1.35    (27)

φ is given in degrees. The distributions of a = (a₁, a₂)^T and ε are used in the analysis. A two-
dimensional normal distribution is assumed for a, and a normal distribution with zero mean is used
for ε.
It is noticed that the applied friction angle φ is a local friction angle for the sand under the con-
sidered footing, since the local variations of φ from point to point within the sand are included in
terms of the term ε which comes in addition to the average of φ as given by a₁ and a₂. This
representation of φ may be somewhat on the conservative side. A more correct, but also more com-
plex representation would be to model the friction angle φ as a random field, see Ronold (1990), in
which case the local variations of φ from point to point to some degree would average out over the
spudcan area.
The relative density is assumed to follow a normal distribution with mean value E[D_r] = 0.7
and standard deviation D[D_r] = 0.1.
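Eqs. (24)-(26) combine into a short helper showing how the relative density feeds through the friction angle into the bearing capacity factors. The sketch below uses the mean regression coefficients of Eq. (27) and sets the error term ε to zero; it is illustrative only.

```python
import math

def friction_angle(dr, a1=19.43, a2=25.76, eps=0.0):
    # Eq. (26) with the mean regression coefficients of Eq. (27);
    # eps is the zero-mean error term, set to zero in this sketch
    return a1 + a2 * dr + eps  # degrees

def bearing_capacity_factors(phi_deg):
    # Eqs. (24)-(25): bearing capacity factors for sand
    phi = math.radians(phi_deg)
    nq = math.exp(math.pi * math.tan(phi)) * math.tan(math.pi / 4.0 + phi / 2.0) ** 2
    ngamma = 1.5 * (nq - 1.0) * math.tan(phi)
    return nq, ngamma
```

At the mean relative density D_r = 0.7 the model gives φ ≈ 37.5 degrees; note how steeply N_q and N_γ grow with φ, which is what makes the installation-phase interaction between φ and the contact area discussed later so important.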
The moment capacity of a footing depends on the vertical force relative to the vertical capacity
and can be found by an evaluation of the maximum allowable eccentricity of the vertical force.
In conventional deterministic terminology the stability of a spudcan footing can be expressed in
terms of a factor of safety, either with respect to horizontal sliding or with respect to a deeper bear-
ing capacity failure. For a bearing capacity type of failure the conventional factor of safety can be
calculated as

FS_B = Q_V/F_V    (28)

where F_V is the actual vertical force and Q_V is the available vertical capacity for the considered
footing. For a horizontal sliding type of failure the conventional factor of safety can be calculated as

FS_S = Q_H/F_H    (29)

where F_H is the actual horizontal force and Q_H is the available horizontal frictional capacity for the
considered footing.

A two-dimensional frame model is assumed for the jack-up platform with the hull and the legs
modeled as beams with some structural stiffness. The trussed platform legs are represented as cir-
cular cylinders with an equivalent diameter, and equivalent drag and inertia coefficients are applied
in the analysis. Three foundation springs, corresponding to three degrees of freedom, are modeled
at each spudcan footing, namely one for vertical translation, one for horizontal translation, and one
for rotation. The soil is generally non-linear, and this non-linearity is represented in the analysis by
modeling the springs as hyperbolic load-displacement relationships for the mentioned three degrees
of freedom, see Figure 2.
Figure 2 Hyperbolic load-displacement relationship for foundation springs

At low load levels, initial foundation stiffnesses are calculated based on solutions for surface
foundations on elastic halfspaces in dependence of an initial shear modulus G for the soil, see Gaze-
tas (1983). The hyperbolic load-displacement relationships are modeled with these initial elastic
stiffnesses and are uniquely defined by assuming horizontal load asymptotes equal to 1/0.95 = 1.052
times the respective capacities. G is a difficult property to assess, and it is usually encumbered with
a significant uncertainty. Very little data are available for establishing statistical properties for G.
Rather than applying a complicated model for G in dependence of the stress conditions in the soil,
such as the model described by Whitman (1974), a simplified model is adopted with G represented
by a normally distributed variable with mean value E[G] = 161000 kPa and standard deviation
D[G] = 16000 kPa. This is used together with a deterministic Poisson's ratio ν = 0.3. The use of this
simple approach to the representation of the shear modulus of the soil is justified later.
The loading of the platform, consisting of selfweight and environmental forces, is transferred to
forces and moments at the three spudcan footings. The final distribution of vertical forces, horizon-
tal forces and moments between the three footings depends on the stiffness of the structure in con-
junction with the stiffness of the three foundations, expressed in terms of the degree of mobilization
of the foundation capacity at each footing. This distribution of forces and moments is found by an
iterative numerical approach similar to the procedure described by Arnesen et al. (1988).
For a wave of a certain height and period, the factor of safety for a footing for a considered
failure mode varies with the position of the wave as it passes by the platform, and the factor of
safety FS associated with the wave is hence chosen as the minimum value of the factor of safety
occurring during this passing. Failure occurs when FS is less or equal 1.0.

5. RELIABILITY ANALYSIS
Reliabilities are computed by a first-order reliability method as described in Madsen et al.
(1986). The input to a reliability analysis consists of a limit state function in terms of a set of basic
variables which consists of stochastic variables as well as deterministic parameters. The statistical
distributions of the stochastic basic variables are given in the above sections together with the
values of the deterministic parameters.
For a probabilistic analysis of the foundation stability of the considered example jack-up plat-
form the limit state function is chosen as
g(X) = FS_B(X) − 1.0    (30)
when a bearing capacity failure of the single front footing is the critical failure mode for the plat-
form foundation. FSB is the conventional factor of safety pertaining to this failure mode for the
front footing, calculated as described previously. X denotes the stochastic variables, which govern

the stability problem, and which are described previously.


Correspondingly, the limit state function is chosen as
g(X) = FS_S(X) − 1.0    (31)
when horizontal sliding of one of the aft footings is the critical failure mode for the platform founda-
tion. FSs is the conventional factor of safety pertaining to this failure mode for the aft footings, cal-
culated as described previously.

6. RESULTS OF RELIABILITY ANALYSIS


The period of operation for the platform on the considered location is assumed to be D = 1 year.
The long-term reliability of the platform foundation against foundation failure during this period of
operation is sought. As a first step, the stability against a conventional bearing capacity failure
under the single front footing is considered, since this by far will be the most likely failure mode for
the foundation of the considered example platform. It is a standard approach to assume that the
long-term reliability can be estimated by calculating the probability of failure for the highest wave
that occurs in the most severe storm during the period of operation. This can be done directly by a
conventional reliability analysis, where the stochastic variables denoted as X are input, and the
results of such an analysis based on a first-order reliability method are shown in Table 1.
Table 1 Foundation stability of a jack-up platform
Bearing capacity failure under the front footing
Results of conventional long-term reliability analysis
Reliability index: β_L = 2.547
Probability of failure: P_F = Φ(−β_L) = 0.543·10^−2

Variable                          Distribution       Design point x*   Sensitivity factor α
Significant wave height H_S,max   Extreme Weibull    13.62 m           0.9068
Mean zero-upcrossing period T_z   Lognormal          12.84 s           0.0054
Maximum wave height H_max         Extreme value      29.31 m           0.4122
Wave period T                     Longuet-Higgins    13.97 s           0.0134
Wind velocity u₁₀                 Weibull            29.91 m/s         0.0573
Relative density D_r              Normal             0.7186            0.0730
Friction angle coefficient a₁     Normal             19.45             0.0037
Friction angle coefficient a₂     Normal             25.75             0.0096
Friction angle error term ε       Normal             0.1314            0.0382
Initial shear modulus G           Normal             160710.00 kPa     −0.0072

It is, however, reasonable to expect that sea states other than the most severe storm will also
contribute to the probability of failure under the considered front leg footing during the period of
operation for the platform. This requires solution of a series system of all sea states during the
period of operation, in this case N =1460 sea states. The problem can practicably be solved by a pro-
cedure which involves nested applications of reliability analyses in an iterative procedure, see
Bjerager et al. (1988) and Wen and Chen (1987).
The stochastic variables X are divided in two groups, Y and Z. Z are the system variables, i.e.
in this case the soil strength and stiffness variables, which are the same during all sea states. Yare
the sea state variables, and they are assumed independent from one sea state to another.
A given outcome z of Z produces a conditional failure probability for the critical spudcan footing
in an arbitrary sea state

P_FZ(z) = P[g(Y, z) ≤ 0]    (32)

A conditional reliability index β_S corresponds to this probability and is found by a reliability analysis
where Y is modeled as stochastic variables and Z = z is fixed,

β_S = −Φ^−1(P_FZ(z))    (33)

The N sea states are assumed to be independent, and when the probability is conditioned on

an outcome z of the system variables Z, the corresponding conditional safety margins for foundation
failure will be independent. The conditional probability of failure during the N sea states in the
period of operation can hence be calculated as
P_FZ,N(z) = 1 − (1 − P_FZ(z))^N    (34)

The total probability of failure during the N sea states is found by integration over all possible
outcomes z of Z

P_F = ∫ P_FZ,N(z) f_Z(z) dz    (35)

By introducing an auxiliary variable U which is standard normally distributed, this probability
can be rewritten as

P_F = P[U + Φ^−1(1 − P_FZ,N(Z)) ≤ 0]    (36)

see Bjerager et al. (1988). Insertion of the expressions from Eqs. (33) and (34) in Eq. (36) yields

P_F = P[U + Φ^−1(Φ(β_S(Z))^N) ≤ 0]    (37)

P_F is then solved by a first-order reliability analysis under application of a limit state function

h = U + Φ^−1(Φ(β_S(Z))^N)    (38)
such that

P_F = P[h ≤ 0]    (39)

and the corresponding unconditional long-term reliability index is β_L = −Φ^−1(P_F).
U and Z are represented as stochastic basic variables, and β_L can be solved provided the par-
tial derivatives of β_S with respect to Z can be computed. These derivatives are equal to the
parametric sensitivity factors which can be obtained as byproducts of the first-order computation of
β_S as described above, see Madsen et al. (1986). The procedure for solution of P_F and β_L is hence a
nested application of first-order reliability analyses, and this procedure is iterative in the sense that
it has to be repeated until the conditional short-term reliability index β_S is calculated for a fixed set
of the system variables Z = z equal to the design point Z = z* pertaining to β_L.
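The long-term integration of Eqs. (33)-(35) can be illustrated on a toy problem in which the system variables Z are collapsed into a single standard normal variable with a hypothetical linear conditional reliability index β_S(z). Eq. (35) is then a one-dimensional integral that can be evaluated directly, as in the sketch below, rather than by the nested first-order analysis of Eqs. (36)-(38); the model β_S(z) = 4.2 − 0.3z is an invented stand-in, not a result from the paper.

```python
from statistics import NormalDist

nd = NormalDist()
N = 1460  # number of 6-hour sea states in one year of operation

def beta_s(z):
    # Hypothetical linear conditional short-term reliability index;
    # in the paper this value comes from a full FORM analysis at fixed Z = z
    return 4.2 - 0.3 * z

def long_term_pf(lo=-8.0, hi=8.0, steps=4000):
    # Eqs. (33)-(35): integrate the conditional N-sea-state failure
    # probability over a standard normal system variable Z (trapezoidal rule)
    dz = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        z = lo + i * dz
        p_fz = nd.cdf(-beta_s(z))            # Eq. (33) inverted
        p_fzn = 1.0 - (1.0 - p_fz) ** N      # Eq. (34)
        w = 0.5 if i in (0, steps) else 1.0
        total += w * p_fzn * nd.pdf(z) * dz  # Eq. (35)
    return total

pf = long_term_pf()
beta_l = -nd.inv_cdf(pf)  # unconditional long-term reliability index
```

With this toy model β_L falls well below the conditional index β_S at the mean of Z, illustrating how the integration over the system variables and the N sea states lowers the reliability relative to a single conditional analysis.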
The conditional probability P_FZ(z) is here calculated by a first-order reliability method. Because
such a first-order result represents an approximation to the true probability it may be encumbered
with an error. This error will in general be blown up systematically by the exponentiation to the
power N and will thus give an error in the probability P_FZ,N(z). However, as long as P_FZ(z) ≪ 1/N,
the relative error in P_FZ,N(z) due to an error in P_FZ(z) will equal the relative error in P_FZ(z), because
P_FZ,N(z) can then be approximated by N·P_FZ(z). Otherwise the relative error in P_FZ,N(z) will be
larger.
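The condition P_FZ(z) ≪ 1/N is easy to check numerically; the small sketch below evaluates Eq. (34) against its linearization N·P_FZ(z) for a few probability levels (the probability values are arbitrary examples, not results from the analysis).

```python
N = 1460  # number of 6-hour sea states in one year (1/N is about 6.8e-4)

def pf_series(p):
    # Eq. (34): failure probability over N independent sea states with
    # identical conditional failure probability p
    return 1.0 - (1.0 - p) ** N

# For p << 1/N the linearization N*p is nearly exact, so a relative error
# in p carries through unchanged; nearer to 1/N the two diverge.
for p in (1e-8, 1e-6, 1e-4):
    print(p, pf_series(p), N * p)
```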
The result obtained for the long-term probability P_F by the nested reliability analysis will be
sufficiently accurate if the result for P_FZ,N(z) is considered to be sufficiently accurate. For the
present example the accuracy in P_FZ,N(z) as determined by a first-order reliability method is con-
sidered to be sufficient because P_FZ(z) ≪ 1/N is fulfilled. Problems may exist where this condition is
not fulfilled, and more accurate methods than a first-order reliability method are then required for
calculation of P_FZ(z). These methods can be importance sampling methods, such as axis-orthogonal
simulation, see Bjerager (1989). For problems with a low dimension of the variable space in the con-
ditional reliability analysis, numerical integration of the probability P_FZ(z) may form an attractive
approach.
For the example problem the results of the nested long-term reliability analysis are shown in
Table 2.
It is noticed that the long-term reliability index β_L = 2.526 calculated by the nested reliability
analysis is only slightly less than the index β_L = 2.547 obtained from the conventional reliability
analysis. This result is very reasonable for a failure problem which is as dominated by the highest
wave during the time of operation as this problem is.

Table 2 Foundation stability of a jack-up platform
Bearing capacity failure under the front footing
Results of nested long-term reliability analysis
Reliability index: β_L = 2.526
Probability of failure: P_F = Φ(−β_L) = 0.578·10^−2

Variable                          Distribution in        Distribution in        Design point x*
                                  conditional short-     unconditional long-
                                  term analysis          term analysis
Significant wave height H_S       Weibull                Not included           13.47 m
Mean zero-upcrossing period T_z   Lognormal              Not included           12.76 s
Maximum wave height H_max         Extreme value          Not included           29.32 m
Wave period T                     Longuet-Higgins        Not included           13.88 s
Wind velocity u₁₀                 Weibull                Not included           29.74 m/s
Relative density D_r              Fixed (condition)      Normal                 0.72
Friction angle coefficient a₁     Fixed (condition)      Normal                 19.44
Friction angle coefficient a₂     Fixed (condition)      Normal                 25.75
Friction angle error term ε       Fixed (condition)      Normal                 0.13
Initial shear modulus G           Fixed (condition)      Normal                 160700.00 kPa
Auxiliary variable U              Not included           Normal                 −2.515
The so-called design point u* = βα gives the most likely values of the stochastic basic variables
at failure expressed in the transformed normal space, see Madsen et al. (1986). The corresponding
point in the space of stochastic basic variables X is shown in the third column of Table 1 and in the
fourth column of Table 2.
It appears that the design point for the system variables implies a design friction angle φ
which is slightly higher than the expected value. This means that the most likely value of φ at
failure is higher than the expected value. This may seem strange, but the explanation is that a
high φ value implies a small penetration depth in the installation phase, and this leads to a small
contact area between the foundation footing and the soil. With respect to the bearing capacity of a
footing, the combination of a small contact area and a high φ is hence more likely at failure than a
combination of a large area and a low φ.
It also appears that the design point value of the initial shear modulus G is very close to the
expected value. This implies that the uncertainty in G does not contribute much to the total uncer-
tainty. This justifies the somewhat simplified model which is used for representation of the initial
shear modulus in the present case.
The long-term reliability index β_L = 2.526 is fairly low, corresponding to a failure probability of
about 0.6%. However, this is not an unexpected result when considering that a traditional installation
procedure is applied where the platform footings are penetrated solely by means of a preload which
is not of any particularly higher magnitude than the design environmental loads. The consequences
of a bearing capacity failure under one footing are not severe as long as there is only partial contact
between the spudcan area and the sand as in the present case. Such a failure under one footing will
then just cause a sudden additional vertical displacement of the footing to take place until an
increased contact area is achieved with an increased bearing capacity. The failure is hence of a self-
stabilizing nature. The sudden additional footing displacement during failure can be compensated
by additional jacking, but is of course still not desirable. It is noticed that in soils where spudcans
are fully embedded after installation, the consequences of a bearing capacity failure under a footing
may be more severe without any stabilizing effect of the additional penetration caused by the
failure.
The conventional reliability analysis presented in Table 1 gives sensitivity factors α which give
an indication of the relative importance of the individual stochastic variables. α_i^2 roughly gives the
fraction of the total uncertainty which is caused by the uncertainty in the i-th stochastic basic vari-
able. It appears that the major contribution to the total uncertainty by far is caused by the uncer-
tainty in the wave loading, whereas the uncertainty in the system variables for soil strength and
stiffness is only of minor importance. It is reasonable that the uncertainty in the wave loading plays

a dominating role, because the maximum wave load in a 1-year time span as considered here is
much more uncertain than the maximum wave load in a typical lifetime of 50 years for a fixed
structure. An insignificant importance is found for the uncertainty in the wind loading conditioned
on the sea state. This may serve to justify the disregard of a wind gust superimposed on the
modeled average sustained wind. The insignificant role played by the uncertainty in the friction
angle is understandable, because the effect of the uncertainty in the friction angle is counteracted by
a corresponding adjustment of the contact area in the installation phase. Sensitivity factors α have
not been interpreted from the nested reliability analysis. However, it is reasonable to expect that the
sensitivity factors α resulting from the conventional reliability analysis are representative also for
the nested analysis, referring to the very close results obtained for the reliability from these two
analyses.
Some of the parameters governing the foundation stability of the considered jack-up platform
have not been modeled as random variables, but as constants. Some of the basic variables are
described by subjectively chosen distribution parameters rather than by statistical estimates of
mean values and standard deviations. It is therefore of interest to study the sensitivity in the relia-
bility index β to changes in these parameters. Such sensitivities are output from the analysis in
terms of ∂β/∂p, where p is the parameter in question.
One basic variable which is described by a subjectively chosen set of mean value and standard
deviation is the relative density D_r which governs the friction angle φ of the foundation sand. In the
probabilistic analysis a mean value μ_Dr = 0.7 is used together with a standard deviation σ_Dr = 0.1. The
following sensitivities in the reliability index β to changes in these two distribution parameters are
computed,

∂β/∂μ_Dr = −0.73,  ∂β/∂σ_Dr = −0.14    (40)

It appears that the reliability is not excessively sensitive to the choice of either of these two distribu-
tion parameters when considering that reasonable changes of their values lie within the range, say,
0.1-0.2.
The analysis is carried out for a period of operation of one year for the example platform at the
considered site. Other periods of operation may be of interest as well, and a series of additional ana-
lyses has therefore been carried out for other time spans than one year, i.e., both nested reliability
analyses with integration of probability contributions from all sea states and conventional reliability
analyses with probability contribution only from the worst 6-hour sea state. The results of these
additional analyses are shown in Figure 3 in terms of the reliability index β versus the period of
operation D.

Figure 3 Reliability index β vs. period of operation D, comparing the conventional analysis (worst sea state only) with the nested analysis (all sea states)

It appears from Figure 3 that even for longer periods of operation than one year, the worst 6-
hour sea state gives the major contribution to the total failure probability, as there is not much
difference between β computed by the nested analysis and β computed for the worst sea state only.
It is noticed, however, that this difference is seen to increase slightly for increasing period of opera-
tion, and it is verified that there is a correspondingly increasing difference in the failure probability.
This is a natural result since the longer the period of operation, the more sea states contribute to
the total failure probability in the nested analysis.

The analysis results presented above all pertain to analyses carried out for the con-
sidered most likely failure mode for the example platform foundation, namely a bearing capacity
type failure under the single front footing. As stated above, horizontal sliding of one or both of the
two aft footings may be an alternative failure mode, and fully analogous reliability analyses are
therefore carried out also for this failure mode. The result of the conventional reliability analysis,
where only probability contributions from the worst sea state in one year are considered, is a failure
probability P_F = 0.2467·10^−3 with a corresponding reliability index β_L = 3.484. The result of the
nested reliability analysis, where the probability contributions from all sea states in one year are
integrated, is a failure probability of P_F = 0.2571·10^−3 with a corresponding reliability index
β_L = 3.473.
It is noticed that the long-term reliability index β_L as calculated by the nested reliability
analysis is just slightly less than the reliability index obtained from the conventional reliability
analysis, so also for horizontal sliding it can be concluded that the major contribution to the total
failure probability comes from the worst sea state during the period of operation for the platform. As
for the bearing capacity failure, the predominant uncertainty source is found to be the uncertainty
in the environmental loading. The reliability index in excess of 3 implies a much smaller probability
of failure for the horizontal sliding type of failure than for the bearing capacity type of failure, and
this index can be assessed to be acceptably high as structural codes generally result in designs with
a reliability index between 3 and 5, see Madsen et al. (1986). In this context it is worthwhile notic-
ing that the consequences of horizontal sliding of one of the platform footings are much more severe
than the consequences of a bearing capacity failure under such a footing. Whereas the bearing capa-
city type of failure is of a self-stabilizing nature with little or no consequence for the platform struc-
ture, horizontal sliding of a platform footing may expose the platform legs to excessive stresses and
result in structural damage or even capsize of the platform.

7. CONCLUSIONS
A procedure for calculation of the long-term reliability of a jack-up platform foundation has
been presented, based on deterministic methods for evaluation of foundation stability in conjunction
with a nested application of first-order reliability analyses. A probabilistic analysis of the foundation
stability of an example platform founded on a sand in 70 meters of water has been performed
according to this procedure. In this analysis uncertainties were included in sea state, wave and wind
properties as well as in soil strength and stiffness properties. Two possible failure modes were con-
sidered, namely a conventional bearing capacity failure of one platform footing, and a horizontal
sliding of such a footing. The former was found to have the highest probability of occurrence,
whereas the latter was assessed to be the most dangerous for the platform structure.
The uncertainty sources have been studied through an analysis of the output from the reliabil-
ity analysis. It is shown that the major uncertainty is due to uncertainty in the determination of
the environmental properties, whereas the uncertainty in the soil strength and stiffness properties is
of only minor importance.
The sensitivity in the calculated failure probability for changes in subjectively chosen distribu-
tion parameters has been interpreted for a few example parameters, thereby to assess the impor-
tance of making proper choices of parameters which are modeled as fixed values in the analysis.
Many properties govern the foundation stability of a jack-up platform, but only the sea state,
wave and wind properties and the soil strength properties have been modeled as random variables
in the presented example study, thereby to limit the number of stochastic variables and reduce the
computer time to an acceptable level. In a more comprehensive and more detailed probabilistic
analysis than the one presented here, it will be natural to model also other properties as random
variables, first of all the wind and current properties and the equivalent drag and inertia coefficients
for use in the load calculations for the platform legs. A stochastic representation of model uncertain-
ties related to failure modes and foundation spring models will also be of major interest in such an
extended analysis.
A three-dimensional platform model rather than the two-dimensional model used in this study
will allow for consideration of the directionality of the loading as well as the possibility of failure
under other footings than the single footings which are critical when the direction of the loading is
prescribed.

Close consideration should be given to the assumption of a Gaussian sea elevation process as a
basis for the calculation of the wave loading. This is because recent research results indicate
that this assumption may not hold, even for such deep waters as considered here. A possibly non-
Gaussian sea elevation process will have impact on the wave height calculation as well as on the
sinusoidal wave assumption made for the force calculation.
Finally, dynamic amplification of the force response can be significant due to the large flexibil-
ity of the jack-up structure, especially in deep waters such as in the presented example, and should
therefore be included in a future analysis.

8. ACKNOWLEDGMENTS
This paper is based on work performed within the research program "Reliability of Marine
Structures", which is supported by A.S Veritas Research, Saga Petroleum a.s., Statoil and Conoco
Norway Inc. This contribution is gratefully acknowledged. The opinions expressed in the paper are
those of the author and should not be construed as reflecting the views of the sponsoring companies.

9. REFERENCES
[1] Arnesen, K., Dahlberg, R., ~eey, H., and Carlsen, C.A., "Soil-Structure Interaction Aspects
for Jack-Up Platforms", Proceedings, 5th International Conference on Behaviour of Offshore
Structures, Trondheim, Norway, 1988.
[2] Bitner-Gregersen, E.M. and Haver, S., "Joint Long Term Description of Environmental
Parameters for Structural Response Calculation", Proceedings, 2nd International Workshop on
Wave Hindcasting and Forecasting, Vancouver, B.C., Canada, 1989.
[3] Bjerager, P., "Probability Computation Methods in Structural and Mechanical Reliability", in
Computational Mechanics of Probabilistic and Reliability Analysis, ed. W.K. Liu and T.
Belytschko, Elme Press International, Lausanne, Switzerland, 1989.
[4] Bjerager, P., Løseth, R., Winterstein, S.R., and Cornell, C.A., "Reliability Method for Marine
Structures under Multiple Environmental Load Processes", Proceedings, 5th International
Conference on Behaviour of Offshore Structures, Trondheim, Norway, 1988.
[5] Det norske Veritas, ''Rules for the Design, Construction and Inspection of Offshore Structures,
Appendix F, Foundations", Det norske Veritas, HIMk, Norway, 1977.
[6] Gazetas, G., "Analysis of Machine Foundation Vibrations: State of the Art", Soil Dynamics
and Earthquake Engineering, Vol. 2, No.1, 1983.
[7] Holm, C.A., Bjerager, P., and Madsen, H.O., "Long Term System Reliability of Offshore
Jacket Structures", Proceedings, 2nd IFIP Working Conference on Reliability and Optimization
of Structural Systems, ed. by P. Thoft-Christensen, London, England, Springer-Verlag, 1988.
[8] Longuet-Higgins, M.S., "On the Joint Distribution of Wave Periods and Amplitudes in a Ran-
dom Wave Field", Proceedings of the Royal Society of London, Vol. A389, pp. 241-258, 1983.
[9] Madsen, H.O., Krenk, S., and Lind, N.C., Methods of Structural Safety, Prentice Hall Inc.,
Englewood Cliffs, New Jersey, 1986.
[10] Ronold, K.O., ''Random Field Modeling of Foundation Failure Modes", Journal of Geotechnical
Engineering, ASCE, Vol. 116, No.4, pp. 554-570, 1990.
[11] Wen, Y.K. and Chen, C.H., "On Fast Integration for Time Variant Structural Reliability", Pro-
babilistic Engineering Mechanics, Vol. 2, No.3, pp. 156-162, 1987.
[12] Whitman, R.V., ''Representation of Soil-Structure Interaction for Offshore Gravity Structures",
Massachusetts Institute of Technology, 1974.
CONSTANT VERSUS TIME DEPENDENT SEISMIC DESIGN COEFFICIENTS

Emilio Rosenblueth & Jose Manuel Jara


Centro de Investigación Sísmica
Camino al Ajusco 203, Mexico, DF, 014200 Mexico

Abstract

We analyze two kinds of problem. In both we deal with structures whose design is
governed by earthquakes generated by a non-Poisson process. One problem
concerns structures designed in a supposedly optimal way but using a simplified
probability distribution of major-earthquake interoccurrence times. The second
problem consists in evaluating the expected loss caused by maintaining the design
coefficients constant in a building code intended to remain in effect over a given
number of years. The purpose of the first type of problem is to have bases for
selecting the simplified model. That of the second type is to guide in deciding
how often to change the coefficients in a building code.
We illustrate both kinds of problem through structures potentially subjected
mainly to subduction earthquakes from either a source not having had characteristic
earthquakes for several decades, or from one that produced such earthquakes four
years ago.
Interoccurrence times of characteristic events are assigned lognormal distri-
butions with uncertain parameters, based on a Bayesian analysis. Other seisms are
taken as Poisson generated. We find that either a lognormal or an exponential dis-
tribution, both with parameters adjusted to give the exact answer about five years
from now, are adequate for the design of all structures to be built within the next
ten years. However, use of a Poisson model with the mean occurrence rate leads to
excessive losses.
Expected losses due to invariability of building-code coefficients are found
to increase practically with the square of the time during which the coefficients
remain constant.

Introduction

Unless disturbances arrive as though generated by a Poisson process, optimum
design parameters are functions of the time of retrofit or construction. Large
earthquakes tend to occur periodically. Therefore, design hazard rates should
depend on the time since the last major earthquake. Since probability distributions
of interarrival times must be partly based on empirical data, they contain
uncertain parameters. Calculation of optimal parameters is then burdensome.
Hence the desirability of distributions with fixed parameters, provided they do
not entail too large errors.
Using worldwide data and those from Mexican subduction earthquakes, Jara and
Rosenblueth (1988) found that a lognormal distribution of large-event interoccur-
rence times was most satisfactory. Its recurrence period had coefficient of varia-
tion 0.22. On Mexico City's soft clay the threat of a major earthquake from the
Guerrero gap will dominate design until the next such event. Taking into account
smaller earthquakes, the authors found that a lognormal distribution with fixed
parameters was satisfactory for structures to be built within ten years following
the 1985 shocks. They indicated that a constant hazard rate would be adequate when
events caused by several sources have comparable importance, as such events are
nearly independent (G Grandori, private communication, 1984). That study did not
recognize uncertainties in the attenuation and site effects nor in structural ca-
pacity.
Cornell and Winterstein (1988) have pointed out that a constant hazard func-
tion will be adequate when the recurrence period of structural damage or collapse
is very uncertain and there are several potential sources of significant quakes.
Here we examine the consequences of having building-code design coefficients
remain in effect for several years. We also explore the effects of the time elapsed
since the last major earthquake, of the uncertainty in recurrence period, attenuation
and structural capacity, and of the existence of many significant seismic
sources.

Utilities and economic set-up

When society is the subject for whom we wish to optimize, we shall assume that,
locally, utility is linearly related with monetary gains and losses; we shall discount
future utilities to obtain their present values by multiplying them by exp(-rt),
where t is time and r a constant discount rate. We shall take r = 0.05/yr.
This is consistent with the average rate in major monetary transactions within the
last several decades, after correcting for inflation.
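As a minimal numeric illustration of this discounting convention (nothing here beyond the stated r = 0.05/yr):

```python
import math

def present_value(utility, t, r=0.05):
    """Discount a utility realized t years from now back to the present."""
    return utility * math.exp(-r * t)

# At r = 0.05/yr, a unit loss 10 years away is worth exp(-0.5), about 0.61, today.
pv10 = present_value(1.0, 10.0)
```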
A study of a few ten-story reinforced concrete structures (Vargas and Jara,
1989) gave as initial cost

C = C0 max{1, [1 + c(x - a)]^b}     (1)

where C0 is a constant, x is the design base shear coefficient, a = 0.05, b = 1.1 and c = 1.4.
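A sketch of this cost law, assuming the eq 1 form C = C0 max{1, [1 + c(x - a)]^b} (our reading of the printed equation):

```python
def initial_cost(x, C0=1.0, a=0.05, b=1.1, c=1.4):
    """Initial cost versus design base shear coefficient x, in the
    assumed eq 1 form C = C0 * max(1, (1 + c*(x - a))**b)."""
    return C0 * max(1.0, (1.0 + c * (x - a)) ** b)

# Below x = a the cost is flat at C0; above it, cost grows slightly
# faster than linearly (b = 1.1).
```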

We express expected earthquake loss as a function of the probability of collapse,
i.e. the probability that the actual base shear coefficient will exceed a
structure's actual strength.
The following expression is based on statistics of earthquake losses in Mexico
City (Esteva et al, 1988; Ordaz et al, 1989),

L (2)

where L is expected loss at the time that an earthquake occurs,

P_Z(z) = Φ((1/σ_lnZ) ln(z/m_Z))     (3)

is the cumulative distribution function of Z, taken to be lognormal with median m_Z
and logarithmic standard deviation σ_lnZ; Φ(·) is the standard normal distribution function,
z = x/y, x is the design base shear coefficient and y is the computed base shear
coefficient acting on the type of structures of interest. Y is the actual response
acceleration associated with the structures' fundamental period of vibration and
damping ratio and corrected for nonlinear structural behavior, torsion,
multiplicity of degrees of freedom and soil-structure interaction. From experience
(Esteva and Ruiz, 1989; Ordaz et al, 1989) we take ln m_Z = -6.6 and
σ_lnZ|A = 0.82 for these structures. L includes the expected direct and indirect economic and material
losses to society, what society loses because some buildings cease to function, or
to do so properly, and the value that society places on loss of life and limb and
on suffering. In eq 2, the term linear in P_Z(z) accounts for direct economic
losses.
If t were known, the expected present value of earthquake losses at the time
of construction would be D = L exp[-r(t - t0)], where t0 is the time elapsed between
the last characteristic earthquake and the structure's construction. Since t is
uncertain we write

D = L ∫_{t0}^{∞} f_T(t) e^{-r(t-t0)} dt     (4)

where

h(t) = f_T(t)/[1 - F_T(t)]     (5)

is the hazard function, f_T(t) the density function of T, and

1 - F_T(t0)     (6)

the probability that a new quake has not occurred up to time t0.
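Numerically, this expected present value can be evaluated by simple quadrature. The sketch below conditions the density on no event having occurred up to t0 (dividing by the survival probability 1 - F_T(t0)) and uses illustrative lognormal parameters, not the paper's fitted values:

```python
import math

SQRT2PI = math.sqrt(2.0 * math.pi)

def lognormal_pdf(t, m, sigma):
    """Lognormal density with median m and log-standard-deviation sigma."""
    if t <= 0.0:
        return 0.0
    u = math.log(t / m) / sigma
    return math.exp(-0.5 * u * u) / (t * sigma * SQRT2PI)

def lognormal_cdf(t, m, sigma):
    return 0.5 * (1.0 + math.erf(math.log(t / m) / (sigma * math.sqrt(2.0))))

def expected_present_loss(L, t0, m, sigma, r=0.05, t_max=600.0, n=30000):
    """D = L * E[exp(-r(T - t0)) | T > t0], trapezoidal integration."""
    surv = 1.0 - lognormal_cdf(t0, m, sigma)
    dt = (t_max - t0) / n
    acc = 0.0
    for i in range(n + 1):
        t = t0 + i * dt
        w = 0.5 if i in (0, n) else 1.0  # trapezoidal end weights
        acc += w * lognormal_pdf(t, m, sigma) * math.exp(-r * (t - t0)) * dt
    return L * acc / surv

# The closer t0 is to (or past) the median interoccurrence time, the
# less the next event is discounted and the larger D becomes.
D_late = expected_present_loss(1.0, t0=78.0, m=48.0, sigma=0.38)
D_early = expected_present_loss(1.0, t0=10.0, m=48.0, sigma=0.38)
```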

Sometime after a structure collapses, is demolished or suffers damage, it
is rebuilt or retrofitted. It will then be exposed to additional earthquakes. The
present values of the ensuing losses should be included in D. However, if our attention
centers on major earthquakes, this contribution is small compared with the
effects of the first macroseism to occur after construction (Rosenblueth, 1976).
The additional contributions will be taken into account in our computations.
The optimal x is obtained by solving

d(C + D)/dx = 0     (7)

Let this value be x0. The optimal utility is some constant minus C(x0) + D(x0).
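The stationarity condition above can be solved by a simple one-dimensional search; the cost and loss functions below are hypothetical stand-ins (not the paper's fitted models), included only to make the trade-off concrete:

```python
import math

def total_cost(x, C0=1.0, a=0.05, b=1.1, c=1.4, L0=5.0, k=30.0):
    """C(x) + D(x): an eq-1-style initial cost plus a hypothetical
    expected discounted loss decaying exponentially in x."""
    C = C0 * max(1.0, (1.0 + c * (x - a)) ** b)
    D = L0 * math.exp(-k * x)  # stand-in for the discounted-loss term
    return C + D

def optimal_x(lo=0.0, hi=1.0, n=20000):
    """Grid search for the x0 minimizing C + D."""
    return min((lo + (hi - lo) * i / n for i in range(n + 1)), key=total_cost)

x0 = optimal_x()  # interior minimum: marginal cost equals marginal saving
```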
Consider a structure to be built at time t_s, designed optimally using a
simplified distribution whose parameters have been adjusted to give the same
optimum design at some time t0 as provided by our present state of knowledge. If
the earthquake generating process is Poisson and the probability distribution of
the characteristic-earthquake magnitude is time independent, or if t_s = t0, then
there is no utility loss for any distribution that we assign to the time between
characteristic earthquakes, nor if we use the compound model that synthesizes
our present state of knowledge.
The second problem concerns a building code that is to remain in effect for
ten years, or less if a major earthquake occurs earlier. We take the construction
rate of each structural type as constant during this interval. We shall express
utilities as their present value at the time t_c that the code is approved,
shift the time origin by an amount t_s and translate to present values at t_s. (This
entails negligible errors in view of the small probability that the next major
earthquake occurs before the ten years are up.) In solving this second problem we
use only the compound model associated with our present state of knowledge.
We will now assume that losses not exceeding 2.5 x 10^-3 C0 are negligible
when due to using a simplified probability distribution of the characteristic-earthquake
interoccurrence times.

Fig 1 Probability density function of major-earthquake interoccurrence times
Fig 2 Hazard function for lognormal distribution with σ_lnS = 0.39

Earthquake statistics

For our site there is one dominant source of earthquakes. Its characteristic
events occur at random time intervals, which we shall call S, having a bimodal
probability density function (Jara and Rosenblueth, 1988) (see fig 1). The density
can be regarded as the weighted sum of two unimodal ones, the first nearly uniform
(Hong, 1988); the second a lognormal distribution. The first distribution is of
interest for emergency decisions; the second one is more relevant in structural
design; we will confine our attention to it. Its hazard function is as in fig 2.
The slow decrease in fT(t) for large t/ES (ES=expected S) may be counterintuitive
and make the lognormal distribution suspect (Suzuki and Kiremidjian, 1988). How-
ever, our intuition does not go far in problems of this nature, so the hazard
function should perhaps descend for large t/ES; intuitively it should for
extremely long t. Also, in the applications that now interest us there is no case
in which t/ES approaches 2.5, so the question is purely academic.
This distribution has two parameters. We choose these to be the median m_S
and the standard deviation σ_lnS.
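The hazard function of fig 2 is h(t) = f_S(t)/(1 - F_S(t)); a sketch, using σ_lnS = 0.39 from the figure caption and a median of 48 yr from the fit quoted later in the paper, purely for illustration:

```python
import math

def lognormal_hazard(t, m, sigma):
    """Hazard rate h(t) = f(t) / (1 - F(t)) for a lognormal with median m
    and logarithmic standard deviation sigma."""
    if t <= 0.0:
        return 0.0
    u = math.log(t / m) / sigma
    pdf = math.exp(-0.5 * u * u) / (t * sigma * math.sqrt(2.0 * math.pi))
    surv = 0.5 * (1.0 - math.erf(u / math.sqrt(2.0)))
    return pdf / surv

# The hazard is low shortly after an event and rises as the elapsed
# time approaches the median interoccurrence time, as in fig 2.
h_10 = lognormal_hazard(10.0, 48.0, 0.39)
h_60 = lognormal_hazard(60.0, 48.0, 0.39)
```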

We are uncertain about major-event magnitudes, their relation to the peak acceleration
on hard ground, and the relation between this acceleration and what we may call
effective spectral acceleration. To study this matter we shall resort to a model that
incorporates an estimate of the uncertainty in the relation between M and Y and in
that between Y and the actual structural strength.
Based on a regression analysis of 16 subduction earthquakes, Singh et al
(1987a) have proposed the relation

E log A0 = 2.404 - 2.976 log R + 0.429M     (8)

where A0 is peak ground acceleration, as a fraction of gravity, at University
City (within the Valley of Mexico), R is the closest distance in kilometers
from this point to the rupture area and M is surface-wave magnitude. Given M and
R, ln A0 is assigned a normal distribution. Its sample standard deviation is 0.37.
There are good reasons (Singh et al, 1988), from Michoacan and Guerrero
earthquakes, for taking Fourier spectral amplitudes and response spectral ordinates
in the valley proportional to those at University City (save at spots of very soft
clay, where for major earthquakes there seems to be some large-scale nonlinear soil
behavior). This does not mean that throughout the valley they are proportional to
A0, for the frequency content changes from one earthquake to another depending on
source mechanism details (Singh et al, 1987b).
From the Mexico 1985 response spectra for a natural period of 1.2 s and 5 percent
damping, using a ratio of the medians of Y and A0 of 5, typical of the lake
bed, and applying eq 8 we find

E log Y = 3.103 - 2.976 log R + 0.429M     (9)

and σ_lnY|M,R = 0.37, with ln Y|M,R normal.
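Eqs 8 and 9 are direct to evaluate; note that their constants differ by log10(5) ≈ 0.699, the assumed lake-bed amplification ratio:

```python
import math

def e_log_a0(M, R):
    """Eq 8: expected log10 of peak ground acceleration (fraction of g)
    at University City, given magnitude M and closest distance R in km."""
    return 2.404 - 2.976 * math.log10(R) + 0.429 * M

def e_log_y(M, R):
    """Eq 9: expected log10 of the effective spectral acceleration Y on
    the lake bed (1.2 s period, 5 percent damping)."""
    return 3.103 - 2.976 * math.log10(R) + 0.429 * M

# The two medians differ by a constant factor of about 5 in acceleration.
ratio_log = e_log_y(8.1, 300.0) - e_log_a0(8.1, 300.0)
```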


No correlation was found by Jara and Rosenblueth (1988), nor in Hong's (1988)
physically based model, between the magnitude of a major earthquake and the
time to the next one. From regression, Jara and Rosenblueth established the
following relation between the time since the last major event and the expected
magnitude of the next one,

(10)

where s_{i+1} = t_{i+1} - t_i in years (fig 3). This expression is consistent with results
obtained from Hong's model through simulations. We thus deal with a "slip predictable"
model (Shimazaki and Nakata, 1980; Kiremidjian and Anagnos, 1984). Empirically
it was concluded that M could be assigned a Gaussian distribution with
σ_M|S = 0.27.

Fig 3 Expected magnitude of second macroseism in a doublet
(abscissa: time elapsed since the last characteristic earthquake, years)

For a lower limit to σ_lnZ we shall only take into account the uncertainties in
M and in the relation between M and A0: σ_lnZ > [(0.27/2.30)^2 + 0.37^2]^{1/2} = 0.4. Incorporating
uncertainties in the relation between A0 and Y and in X we find that σ_lnZ
must also exceed (0.4^2 + 0.82^2)^{1/2} = 0.9, for individual variances were obtained exclusively
from samples. As a reasonable upper limit we shall adopt 1.3.
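The two bounds combine standard deviations in quadrature; checking the arithmetic quoted here:

```python
import math

# Lower bound: magnitude scatter (0.27, converted from log10 units via
# ln 10, approximately 2.30) combined with the 0.37 scatter of eq 8.
lower = math.sqrt((0.27 / 2.30) ** 2 + 0.37 ** 2)   # ~0.39, quoted as 0.4

# Adding the 0.82 scatter in the A0-to-Y relation and in X.
combined = math.sqrt(0.4 ** 2 + 0.82 ** 2)          # ~0.91, quoted as 0.9
```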
Bayes' theorem was applied twice to determine the probability distribution of
parameters in the lognormal distribution of interoccurrence times for each of sev-
eral Mexican subduction segments. The first step began with a rather diffuse
prior of the expectation and standard deviation of ln S. This was based on
worldwide data on subduction earthquakes. Data in the first update were those for the
Mexican Pacific Coast. Data for a particular segment served to update the second
time.

When optimizing a building code we are interested in structures to be built at
different times after the code is promulgated. For those to be erected soon thereafter,
the last piece of information is that no macroseisms occurred between the
last one and that date. For structures built later we know besides that no
macroseism occurred between promulgation of the code and erection of the structure.
For these there is a lengthening of the major-event return period, all the
more pronounced the less abundant the original data.

Michoacan and Guerrero segments

We shall concentrate on the Michoacan segment, where the 1985 shocks originated,
and on the Guerrero gap, West of Acapulco, which has been quiescent since November
1911.
In addition to the major earthquakes generated at these segments, we shall
recognize shocks of smaller intensity in Mexico City to optimize design base shear
coefficients. These shocks come from many essentially independent sources. We
will treat them as though generated by a Poisson process. For each source we may
write the magnitude exceedance rate as proposed by Cornell and Vanmarcke
(1969) :

-1
A(M) A(M1 ) ( e -f3M-e -f3M)
u ( e -f3M1 -Mu ) (11 )

if M~Mu' where M1 is a threshold magnitude, Mu is the maximum magnitude that can be


generated at that source and f3 is a parameter that for these earthquakes can. be
taken as 2.4 (Rosenblueth and Ordaz, 1986). Under the assumption that eq 9 holds
for earthquakes from all sources we find that the exceedance rate of X is
approximately proportional to

x^{-β/(0.429 ln 10)} - y_u^{-β/(0.429 ln 10)} = x^{-2.37} - y_u^{-2.37}     (12)

where y_u is the value of Y associated with Mu. (The relation is not exact
because of the uncertainty in the relation between Mu and y_u.) The exceedance rate
of X due to earthquakes from all the sources is the sum of the rates for the individual
sources. Assuming that y_u = 0.26 is the same for earthquakes from all the
sources and using statistics from earthquakes in Mexico City we find that the yearly
exceedance rate λ(x) of an effective design spectral acceleration x times gravity is

λ(x)     (13)

if x ≤ 0.26. According to eq 4 the contribution of the smaller shocks to the expected
present value of earthquake losses is

D = λ(x) L / [r + λ(x)]     (14)

We introduce a negligible error by taking the characteristic-earthquake and the
Poisson-process contributions to D as additive.
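The Poisson background contribution combines eqs 11 and 14; a sketch with illustrative M1 and Mu (not values from the paper), plus the discounting identity D = λL/(r + λ):

```python
import math

def exceedance_rate(M, lam1, M1, Mu, beta=2.4):
    """Eq 11: yearly rate of events of magnitude >= M, a doubly truncated
    exponential between the threshold M1 and the maximum Mu."""
    if M >= Mu:
        return 0.0
    num = math.exp(-beta * M) - math.exp(-beta * Mu)
    den = math.exp(-beta * M1) - math.exp(-beta * Mu)
    return lam1 * num / den

def poisson_discounted_loss(lam, L, r=0.05):
    """Eq 14: expected present value of the loss in the next Poisson
    event, E[L exp(-r T)] with T ~ Exponential(lam)."""
    return lam * L / (r + lam)

# Illustrative numbers: threshold M1 = 4.5, cutoff Mu = 7.5.
rate_55 = exceedance_rate(5.5, lam1=1.0, M1=4.5, Mu=7.5)
D_background = poisson_discounted_loss(rate_55, L=1.0)
```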

Results

Consider the present (1989) situation on Mexico City soft clay. Design will be
governed by the threat of the next Guerrero gap earthquake. We find ES = 56.5 yr.
The optimum x for structures built about five years from now (some 83 yr after the
last major event) is 0.212 when σ_lnZ = 1.3.

We now retain the dependence of EM on S and adjust m_S and σ_lnS of a fixed-parameter
lognormal distribution so that x0 = 0.212 when t_s = 5 yr. We get m_S = 48.0 yr,
σ_lnS = 0.380 (equivalently, ES = 51.6 yr, V_S = 0.39, where V denotes coefficient of
variation).
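The equivalence quoted above (median 48.0 yr and σ_lnS = 0.380 versus ES = 51.6 yr and V_S = 0.39) follows from the standard lognormal moment formulas, mean = m exp(σ²/2) and V = sqrt(exp(σ²) - 1):

```python
import math

def lognormal_mean_cov(median, sigma_ln):
    """Mean and coefficient of variation of a lognormal variable given
    its median and logarithmic standard deviation."""
    mean = median * math.exp(0.5 * sigma_ln ** 2)
    cov = math.sqrt(math.exp(sigma_ln ** 2) - 1.0)
    return mean, cov

ES, VS = lognormal_mean_cov(48.0, 0.380)  # ~51.6 yr and ~0.39, as quoted
```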
With these parameters we compute x0 as a function of time and find the expected
losses due to use of this simplified model. These losses, referred to C0, are
shown in fig 4. There is an infinite number of combinations of the parameters
which will furnish zero utility loss at t_s = 5 yr. Losses are so small, compared
even with an engineer's fee, that we can neglect them and say that this model is
adequate for the design of structures to be built during the next ten years at
least.
Next we repeat our calculations using a fixed exceedance rate adjusted so that
x0 = 0.212 while again retaining the relation between EM and S. The corresponding
characteristic-earthquake exceedance rate is 0.061 yr^-1. Results are also depicted
in fig 4. Losses are again quite negligible.
These and other results are displayed in table 1. They correspond to the
situation described but with σ_lnZ = 0.4 as well as 1.3, EM time independent (chosen
to minimize utility loss) as well as a function of S, both lognormal and exponential
distributions of S with fixed parameters chosen to nullify the utility loss
at t_s = 83 yr, and an exponential distribution with λ = 1/ES. The same table contains
the results obtained for these combinations of hypotheses under the assumption that no
earthquakes occur at the Guerrero gap, so the Michoacan-gap characteristic earthquakes
dominate design. However, σ_lnZ = 0.4 does not appear in the table since
x0 = 0.05 and there is no need to design against earthquakes in this case.

Fig 4 Guerrero; σ_lnZ = 1.3 (utility loss/C0 versus years elapsed since the last
event, for the lognormal and exponential fixed-parameter models)

Table 1 Optimum design of individual structure

Source of     σ_lnZ   Optimum       Optimum        Interoccurrence    EM      Maximum
dominant              base shear    occurrence     time probability           utility
earthquakes           coefficient   rate, yr^-1    distribution               loss/C0

Guerrero      0.4     0.066                        Lognormal          f(S)+   4.0x10^-8
                                    0.059          Exponential        f(S)+   1.0x10^-7
                                    0.065                             8.15    3.4x10^-4
                                    0.018*                                    1.5x10^-3

              1.3     0.212                        Lognormal          f(S)+   4.0x10^-6
                                    0.061          Exponential        f(S)+   4.5x10^-6
                                    0.060                             8.15    1.4x10^-4
                                    0.018*                                    5.1x10^-2

Michoacan     1.3     0.085                        Lognormal          f(S)+   1.5x10^-3
                                    0.011          Exponential        f(S)+   1.6x10^-3
                                    0.012                             7.50    2.1x10^-3
                                    0.020*                                    1.3x10^-2

* Reciprocal of recurrence period; not optimum.   + Function of S.

These results indicate that both a lognormal and an exponential distribution
of the interoccurrence time, with fixed parameters, are adequate for structural design
over a period of at least ten years, making the expected magnitude either time
dependent or constant, provided parameters are adjusted to optimize design near the
midpoint of the ten-year interval. However, a Poisson process with the mean occurrence
rate using a constant EM leads to excessively large losses.
When drafting an optimum building code that will remain in effect over ten
years unless there is an earlier, major event we assume that all structures to be
built during this period will be designed for the values of x0 we found in the
foregoing paragraphs associated with t0 = 5 yr. Total expected losses over the ten-year
period are given in table 2 in terms of λC0 = yearly value that would be built
of the structures of interest if they were not earthquake resistant. Losses were
computed using lognormal distributions with uncertain parameters for the interoccurrence
times. The structures we analyzed are representative of all structures
to be built during one year in the city and requiring seismic design. In gauging
the cost of keeping design coefficients constant, λC0 must therefore comprise all
such structures, so it amounts to several thousand times the cost of a single
structure.
Table 2 Code coefficients in effect for ten years

Source of dominant quakes   σ_lnZ   Total utility loss/λC0

Guerrero                    0.4     1.19 x 10^-3
                            1.3     4.23 x 10^-4
Michoacan                   1.3     7.05 x 10^-3

If design coefficients are optimal near the midpoint of the time span during
which the code is expected to be in effect, the total expected losses will vary
very nearly as the square of that time span. Compared with the results in table 2,
the cost of producing sets of design coefficients at intervals much shorter than
ten years is insignificant. The important utility loss would come from resistance
on the part of the profession, mostly due to a loss of credibility. Paradoxically,
it seems reasonable to change the code coefficients every ten years or so under the
conditions we live in at present, under the threat of the Guerrero macroseism, while
if the main danger were from the remote Michoacan event the code ought perhaps to
change every four or five years. The alternative of having the building code specify
time-dependent design coefficients does not seem realistic.
In all the foregoing computations the expected present value of losses due to
the Poisson-process background noise amounts to about 5% of the total. Since re-
sults could be affected by this ratio, computations were repeated with a ten-fold
increase in the Poisson-process losses. We arrived at the same conclusions.
The sensitivity of our results to the time of construction and to that of code
implementation will be more pronounced when and if times of earthquake occurrence
can be better foretold.

Concluding remarks

We have examined the choice of design coefficients when earthquakes that govern de-
sign are not generated by a Poisson process. Earthquakes that reach a given site
are idealized as consisting of two groups. The first comprises moderate-intensity
events. They include moderate-magnitude nearby seisms as well as characteristic
earthquakes from several distant sources. They are idealized as produced by a
Poisson process. The second group consists of intense characteristic quakes
generated at one source and having a time dependent occurrence rate.
For illustration we concentrate on structures on Mexico City soft clay, sub-
jected to background noise plus either the characteristic events from the Guerrero
gap and those from the Michoacan segment or only to the latter. Times elapsed (in
1989) since the last macroseisms are 78 and 4 yr, respectively. The corresponding
recurrence periods are 56.5 and 50.0 yr. For each source we have taken a lower and
an upper limit of the overall standard deviation that includes uncertainties in the
relations between magnitude and peak ground acceleration on hard ground and between
the latter and the actual base shear coefficient as well as in structural
strength.
We consider two kinds of problem. The first concerns structures designed
optimally, not according to a code. We compare results of using either a fixed-
parameter lognormal or an exponential distribution of interarrival times of the
high intensity earthquakes, rather than a lognormal distribution with uncertain
parameters, which represents our present state of knowledge. Parameters of the
simplified probability distributions are adjusted so we get the correct design
base shear coefficients for structures to be built about five years hence. With
both simplified probability distributions we find that utility losses are insignif-
icant for structures to be built at any time within the next ten years. However,
using the mean occurrence rates, losses are excessive, even in the case of maximum
uncertainty.
The second kind of problem concerns a building code that will remain in effect
over a given number of years unless a major earthquake should strike earlier, and
then the code would be changed. We find out the implications of keeping the design
coefficients constant over this period. Using the same earthquake statistics and
parameters as for the first kind of problem we find the expected losses relative to
a code that would vary the design coefficients continuously and optimally as func-
tions of time of construction. These losses vary approximately as the square of
the time during which the code is expected to remain in force. Results should
be compared with the cost of implementing this sort of code and with that of changing
it at more frequent intervals or adopting time-dependent design coefficients.
Ignoring the time elapsed since the last major earthquake is unacceptable in
any case, even with the largest uncertainty assigned to the relation between magni-
tude and structural response.
When and if we have better methods for foretelling occurrence times,
explicit consideration of the time dependence of design coefficients will acquire
paramount importance.

Acknowledgements

This paper is essentially based on Jara and Rosenblueth (1989). It was partially
sponsored by Mexico's Federal District Department and the National Council for
Science and Technology (CONACyT).
We are grateful to Mario Ordaz for his valuable contributions and constructive
criticism of the original manuscript.

References

Cornell, C A and Vanmarcke, E (1969), "The major influences on seismic risk", Proc
IV World Conference on Earthquake Engineering, Santiago de Chile, Chile

Cornell, C A and Winterstein, S R (1988), "Temporal and magnitude dependence in


earthquake recurrence models", Bull Seism Soc Am, Vol 78, 1522-37

Esteva, L et al (1988), "Costos probables de daños causados por temblores en
construcciones", informe del proyecto 8750, Instituto de Ingeniería, UNAM, Mexico

Esteva, L and Ruiz, S E (1989), "Seismic failure rate of multistory frames", Journal
of Structural Engineering, ASCE, Vol 115, 268-284
Hong, H P (1988), "Modelo de generación de temblores de subducción", Doctor's
thesis, Facultad de Ingeniería, UNAM, Mexico

Jara, J M and Rosenblueth, E (1988), "Probability distribution of times between
characteristic subduction earthquakes", Earthquake Spectra, Vol 4, 3, 499-529

Jara, J M and Rosenblueth, E (1989), "Variación de los coeficientes de diseño como
función del tiempo", to be published

Kiremidjian, A and Anagnos, T (1984), "Stochastic slip-predictable model for earthquake
occurrence", Bull Seism Soc Am, Vol 74, 739-55

Ordaz, M, et al (1989), "Riesgo sísmico y espectros de diseño en el estado de
Guerrero", Proc VII Congreso Nacional de Ingeniería Estructural and VII Congreso
Nacional de Ingeniería Sísmica, Acapulco, Mexico

Rosenblueth, E and Ordaz, M (1987), "Use of seismic data from similar regions",
Earth Engnrng Struct Dyn, Vol 15, 619-34
Shimazaki, K and Nakata, T (1980), "Time predictable recurrence for large earthquakes",
Geophys Res Lett, Vol 7, 279-82

Singh, S K, et al (1987a), "Empirical prediction of ground motion in Mexico City
from coastal earthquakes", Bull Seism Soc Am, Vol 77, 1862-67

Singh, S K, et al (1987b), "Were the ground motions observed in Mexico City
during the 19 September, 1985 earthquake anomalously large due to a path or a directivity
effect?", Memorias VII Congreso Nacional de Ingeniería Sísmica,
Querétaro, Mexico, B32-B39

Singh, S K, et al (1988) , "A study of amplification of seismic waves in the Valley


of Mexico with respect to a hill zone site", Earthquake Spectra, Vol 4, 4, 653-74

Suzuki, S and Kiremidjian, A (1988), "Stochastic slip-predictable model for earthquake
occurrence", Bull Seism Soc Am, Vol 74, 739-55

Vargas, E and Jara, J M (1989), "Influencia del coeficiente sísmico de diseño en
el costo de edificios con marcos de concreto", Proc VII Congreso Nacional de
Ingeniería Estructural and VII Congreso Nacional de Ingeniería Sísmica, Acapulco,
Mexico
RELIABILITY OF STRUCTURAL SYSTEMS WITH
REGARD TO PERMANENT DISPLACEMENTS
J. D. Sørensen & P. Thoft-Christensen
University of Aalborg, Sohngaardsholmsvej 57
DK-9000 Aalborg, Denmark

1. INTRODUCTION
In this paper the problem of estimating the accumulated permanent displacements of an offshore
platform during one storm is considered. For dynamically sensitive structural systems subjected
to wave loads this problem is generally very difficult. However, for dynamically insensitive systems
some methods/experience related to permanent deformations are described in Grinda et al. [1]
and Papadrakakis & Loukakis [2]. For general dynamic systems modelled by one-degree-of-freedom
(and two-degrees-of-freedom) systems a number of methods exist, see e.g. Nielsen et al. [3] and
Toro & Cornell [4]. However, for multi-degrees-of-freedom systems very little work (with practical
relevance) has been done.
For steel jacket platforms subjected to wave, wind and current loads with specified main directions
three methods to estimate the permanent displacements during a single storm are proposed, namely
• a simulation approach
• a differential equation approach
• a superposition approach - the simple method
These three approaches are described in Sørensen & Thoft-Christensen [5], Sørensen et al. [6] and
Sørensen & Thoft-Christensen [7]. It is assumed that the structural system can be modelled by a
multi-linear elastic-plastic system and that the loading can be modelled by a stationary Gaussian
Markov process.
In the simulation approach realisations of the load are generated and the permanent displacements
are determined by elastic-plastic analysis of the structural system, see section 2. In the differential
equation approach a system of differential equations is formulated from which the expected value
and the standard deviation of the response (e.g. permanent displacements) can be determined as
a function of the time. Numerical techniques and approximations to solve the system of equations
are discussed in section 3.
These two approaches require a very large number of computer calculations. Therefore a rather
simple method is proposed. The basic idea in the superposition approach is to estimate the
accumulated permanent displacements as sums of permanent displacements from single waves
(this assumption is equivalent to that used in Miner's rule for fatigue analysis). It is described
how a single storm can be broken down into a number of single waves and how the permanent
displacements for each single wave can be determined. Further it is described how the reliability
of the structural system can be estimated.
The three approaches are compared on a qualitative level. Numerical tests are currently being
performed using simple models of offshore platforms. The results of this testing of the simple
method will be published later.

2. SIMULATION APPROACH
First the basic structural model is presented and next the modelling of the stochastic external
loading (wind, wave and current) is discussed.
The following basic assumptions are made:
• the structural system is modelled with straight two or three dimensional truss or beam
elements each with two nodes i and j
• the material is linear elastic - perfectly plastic
• the external loads are applied to the nodes
• second-order effects are neglected.

For a single element the stiffness equation in incremental form in local coordinates is

dRe = ke dUe (1)

where
dRe is the increment in the element nodal forces Re (axial force, moments, etc.)

ke is the elastic stiffness matrix of the element

dUe is the increment of the elastic nodal displacements Ue
The plasticity condition is defined at each node of the element. For node no. i the plasticity
condition (yield surface) is written

Fi(Re) = 1 (2)

Fi < 1 indicates an elastic state, Fi = 1 indicates a plastic state and Fi > 1 is not possible. If the
flow rule (normality principle) is accepted then the increments in the plastic displacements dUp can
be determined, see Bathe [8].
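For a single axial degree of freedom, the incremental relation (1), the yield condition (2) and the resulting permanent displacement can be sketched as follows; the function and parameter names are illustrative, not part of the cited formulation.

```python
def ep_step(du, state, k=1.0, r_yield=1.0):
    """One displacement-driven increment of a 1-DOF elastic-perfectly-plastic
    element: elastic predictor per eq. (1), capped by the yield surface (2).

    du      : imposed total displacement increment
    state   : dict holding the current force 'r' and permanent displacement 'up'
    k       : elastic stiffness (k^e in eq. (1))
    r_yield : yield force, so F(R) = |r| / r_yield and F = 1 at yield
    """
    r_old = state["r"]
    r_trial = r_old + k * du                 # elastic predictor
    if abs(r_trial) <= r_yield:              # F < 1: increment is purely elastic
        state["r"] = r_trial
    else:                                    # F = 1: force stays on yield surface
        state["r"] = r_yield if r_trial > 0.0 else -r_yield
        state["up"] += du - (state["r"] - r_old) / k   # plastic part of du
    return state
```

Driving such an element to 1.5 times its yield displacement and unloading leaves a permanent displacement of 0.5 yield displacements; this permanent part is the quantity accumulated in sections 3 and 4.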
The time-dependent load on the jacket structure consists of wind, wave and current loading and
is modelled by a stochastic vector process {P(t), t ∈ [0, T]} where T is the duration of a storm
and Pi(t), i = 1, ... , N models the load in the ith degree of freedom. N is the total number of
global degrees of freedom in the finite element modelling of the structural system. The stochastic
processes {Pi(t)}, i = 1, ... , N are assumed to be filtered Gaussian processes and are written

P(t) = P0(t) + C0 Q(t) + C1 Q̇(t) + ... + Cm Q^(m)(t) (3)

b0 Q(t) + b1 Q̇(t) + ... + bn Q^(n)(t) = Ḃ(t) (4)

where the filter polynomials have the transfer-function form

C(iω) = C0 + C1 (iω) + ... + Cm (iω)^m (5)

b(iω) = b0 + b1 (iω) + ... + bn (iω)^n (6)

where

P0(t) is a deterministic time-dependent vector modelling the mean value of
the load process (e.g. a current load)

Q(t) is an auxiliary stochastic vector of dimension M

C0, ... , Cm are matrices with constant elements (dimension N x M)

b0, ... , bn are matrices with constant elements (dimension M x M)

{Bi(t)}, i = 1, ... , M are independent unit intensity Wiener processes with the incremental
properties E[dBi(t)] = 0 and E[(dBi(t))^2] = dt.

The elements in the b and C matrices are determined so that the actual cross-spectral densities of
the load process are adapted as closely as possible to those of the Gaussian process {P(t)} defined
by (3) - (4). If the load process mainly models wave forces determined by Morison's equation then
m = 1 can be expected to give reasonable results.
The permanent deformations during a single storm can be simulated by generating realisations of
{B(t)} and using a non-linear finite element program.
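A realisation of the filter (3) - (4) can be generated directly from Wiener increments, here for the simplest first-order scalar case (n = 1, m = 0, M = N = 1) with b1 = 1 and b0 = α; all parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.05, 500.0                     # time step and storm duration [s]
nsteps = int(T / dt)
alpha, sigma = 0.5, 1.0                 # filter coefficient b0 and noise scale
p0 = 2.0                                # mean load P0, e.g. a current load

# Euler-Maruyama integration of (4) with n = 1: dQ = -alpha * Q dt + sigma dB
q = np.zeros(nsteps)
for i in range(1, nsteps):
    dB = rng.normal(0.0, np.sqrt(dt))   # Wiener increment, E[dB^2] = dt
    q[i] = q[i - 1] - alpha * q[i - 1] * dt + sigma * dB

p = p0 + q                              # eq. (3) with m = 0 and C0 = 1
```

The stationary variance of q is σ²/(2α) = 1 for these values, which gives a simple sanity check on the discretisation before realisations are fed into the non-linear finite element analysis.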

3. STOCHASTIC DIFFERENTIAL EQUATIONS


In this section it is described how the response of the elastic-plastic system with the load models
described in section 2 can be determined by using a differential equation approach. The following
system of equations is formulated

dX(t) = A(X,t)dt + D(t)dB(t) (7)

where X is the state vector of dimension Nx, A(X,t) is the drift vector and D(t) of dimension
(Nx x M) is the diffusion matrix.
Based on the modelling in section 2 the state vector is

X = (U, U̇, Up, Re, Q, Q̇, ... , Q^(n-1))^T (8)

where U is the global generalized displacement degrees of freedom (dimension N), U̇ is the
global generalized velocities (dimension N), Up is the permanent deformations of all element d.o.f.
assembled in one vector (dimension (NE x NDOF) where NE is the number of elements and
NDOF is the number of degrees of freedom in each element), Re is the element nodal forces (in
the local element coordinate system) assembled in one vector (dimension (NE x NDOF)) and Q
is an auxiliary vector (dimension M).
The dynamic behaviour of the structural system is assumed to be described by

M Ü + C U̇ + R(U, Up) = P (9)

where M is the mass matrix (lumped or consistent) (dimension (N x N)), C is the damping matrix
(dimension (N x N)) and R is the non-linear restoring force vector (dimension N). If the structural
system is dynamically insensitive then the system is reduced by deleting the inertia and damping
terms.
The drift vector A(X,t) can now be written

A(X,t) = ( U̇ ,
           M^-1 (P(t) - C U̇ - H Re) ,
           U̇p ,
           Kp U̇ ,
           Q̇ , Q̈ , ... , Q^(n-1) ,
           - bn^-1 (b0 Q + b1 Q̇ + ... + bn-1 Q^(n-1)) )^T (10)

where
Ri is the local restoring force vector in element i.
Ti is the transformation matrix for element i from local to global coordinate system.
H is a matrix obtained by assembling the element Ti matrices.
Kp is the global modified stiffness matrix given by

(11)

Since Ri, H and Kp are non-linear functions of X the drift vector is non-linear. The
time-independent diffusion matrix D is modelled as

(12)

Thus the dimension of the state vector X is

Nx = 2N + 2·NE·NDOF + n·M (13)

Even for small structural systems the number of equations can be very large, typically larger than
2000.
The unknown joint density function fX(x, t) of the state vector can in principle be determined by
solving the associated Fokker-Planck-Kolmogorov equation. However, due to the non-linear drift
vector and the large number of equations this is impossible in practice, at least with the present
computer resources.

Instead, one can try to estimate the statistical joint moments of the state variables based on a
closed set of differential equations (see Nielsen et al. [3]). It follows from [3] that the expected
value and the covariances can be determined from

μ̇i = E[Ai] , i = 1, ... , Nx (14)

κ̇ij = E[Ai (Xj - E[Xj])] + E[Aj (Xi - E[Xi])] + (D D^T)ij , i,j = 1, ... , Nx (15)

where
μi is the ith element of E[X]
E[·] is the expectation operation
κij is the i,jth element in the covariance matrix E[(X - E[X])(X - E[X])^T]
The initial values of μ and κ are assumed to be given

μ(t = 0) = μ0 (16)

κ(t = 0) = κ0 (17)

If the joint density function of X is described completely by the second moment characteristics
then the system of equations (14) - (15) is closed and can be solved numerically.
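For a scalar linear SDE the moment equations (14) - (15) close exactly and can be verified against the analytical solution; the sketch below integrates them with forward Euler using illustrative coefficients.

```python
import numpy as np

# Scalar linear SDE: dX = -a*X dt + d dB, i.e. A(X) = -a*X and D = d.
# Eq. (14) gives mu' = -a*mu; eq. (15) gives kappa' = -2*a*kappa + d^2.
a, d = 0.8, 0.6
mu, kappa = 2.0, 0.0                    # initial conditions, eqs. (16)-(17)
dt, T = 1e-3, 5.0
for _ in range(int(T / dt)):            # forward-Euler integration of (14)-(15)
    mu += (-a * mu) * dt
    kappa += (-2.0 * a * kappa + d * d) * dt

mu_exact = 2.0 * np.exp(-a * T)                           # closed-form mean
kappa_exact = d * d / (2 * a) * (1 - np.exp(-2 * a * T))  # closed-form variance
```

For the non-linear drift of (10) the expectations E[Ai] are no longer linear in the moments, which is exactly where the Gaussian closure assumption enters.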
As described above the structural elements are assumed to be elastic-plastic. This implies that
the nodal forces Re will be bounded. Therefore, they will not be Gaussian distributed and the
Gaussian closure method can thus only be expected to give approximate results. The estimates of
the statistical joint moments can be improved by using other distribution functions. If these are
not completely described by the second moment characteristics it is necessary to enlarge the system
of moment differential equations (14) - (15). This possibility is discussed in [3], but for practical
applications to large systems it is not possible to use that approach.
The total number of differential equations corresponding to (14) - (15) will generally be very large
but can be significantly reduced if some of the elements in the state vector are assumed to be
uncorrelated. Other simplifications can also be discussed. The main output from the solution of
(14) - (15) is the time-dependent expected values and covariances (variances) of the permanent
deformations of some critical points.
One solution possibility is to approximate the behaviour of the dynamic system using only a few
eigenmodes. Approximation of the response by the eigenmodes is well known in linear elastic
dynamic analysis. For elastic plastic structures this approach has only been investigated in a few
papers. In Baber [9] modal analysis is used to analyse the response of hysteretic frames. It is
concluded in [9] that the computational effort is decreased significantly, but that it is necessary
to include several modes in addition to the dominant modes corresponding to a linear analysis.
The main reason for this is that the system non-linearities cause interaction between the modal
responses. Further, Baber [9] concludes that elimination of too many modes may have the effect
that the iterations at each time step diverge.
Another possibility to reduce the number of equations in (14) - (15) is to identify the structural
elements which can be expected to remain elastic (or which can be approximated by elastic elements).
The elements in Up (and in X) corresponding to the elastic elements can then be deleted. Also the
corresponding elements in K can be deleted because the restoring forces in the elastic elements
can be determined directly from the nodal displacements. These approximations are generally
non-conservative because the permanent deformations in the elements are underestimated.
A third possibility to reduce the number of unknowns in (14) - (15) is to neglect the time variability
of some of the expectations or covariances, e.g. by assuming some of the variables fully correlated or
independent. The expectations and covariances (variances) of some of the permanent displacements
are the most interesting quantities. Therefore, it should be possible to neglect the time-dependence
of some of the other quantities.

4. THE SIMPLE METHOD


In this section the so-called simple method is described. The main approximations in the simple
method are
• the sequence effects are neglected
  - the order in which the waves arrive is not taken into account (for example the importance
    of wave groups)
  - the influence from accumulated damage is neglected, i.e. a wave arriving when the structural
    system is already severely damaged is assumed to have the same importance as one arriving
    at the start of the storm
• dynamic effects are neglected
• the influence from current is not taken into account
Failure is defined as the event that the permanent displacement of a given point of the structure
exceeds some critical value. Such a failure mode is called a failure element, and the structure will
in general have several failure elements of this kind.
The main idea of this method is to estimate the accumulated permanent deformations as sums of
permanent deformations from single waves. This assumption is equivalent to that used in Miner's
rule (stochastic fatigue analysis). Dynamic effects cannot be included in this method because time
effects are not dealt with directly.
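The linear (Miner-type) accumulation can be sketched in a few lines; the wave counts, per-wave displacements and critical value below are invented for illustration.

```python
def accumulated_displacement(counts, per_wave, z_d=1.0):
    """Miner-type accumulation: Z_D * sum of N_i * D_i over wave-height groups."""
    return z_d * sum(n * d for n, d in zip(counts, per_wave))

N = [400, 80, 6]          # expected waves per height group (illustrative)
D = [0.0, 0.002, 0.05]    # permanent displacement per wave [m] (illustrative)
Dc = 0.5                  # critical permanent displacement [m] (illustrative)
m = Dc - accumulated_displacement(N, D)   # safety margin; failure if m <= 0
```

With these numbers the accumulated displacement is 0.46 m, so the margin stays positive and no failure is indicated.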
The first step is to break a single storm down into a number of single waves. The waves are divided
into NH groups where each group is characterized by
[Hi-1, Hi[ the interval containing the wave heights in this group (H0 = 0), i = 1, ... , NH
Ni the expected number of waves with wave heights in the interval [Hi-1, Hi[
Di the accumulated permanent deformation from one wave with the height (Hi-1 + Hi)/2
at some critical position (e.g. the top of the platform)
A safety margin corresponding to the critical permanent displacement at the critical position can
then be formulated as

M = Dc - ZD Σ_{i=1}^{NH} Ni Di (18)

where
Dc is the critical value of the accumulated permanent deformations,
ZD is a model uncertainty (stochastic) variable
During one storm the sea surface elevation is assumed to be modelled by a stationary Gaussian
stochastic process {η(t)} with zero mean and spectral density function Sη(ω). Sη(ω) is assumed to
be defined by the significant wave height Hs and the zero crossing period Tz.

If T is the length of the storm then the expected total number of waves is T/Tz. To determine
Ni, i = 1, ... , NH (the number of waves with wave heights in the interval [Hi-1, Hi[) the density
function fH(h) of wave heights H is necessary, see figure 1.

Figure 1. Definition of wave height.


The joint distribution function of a peak of the value P at the time t1 and a valley of the value V
(not necessarily the succeeding one) at the time t2 = t1 + τ is given by, see Madsen et al. [10]

               ∫_{-∞}^{0} ∫_{0}^{∞} -x1 x2 fη1η2η̇1η̇2η̈1η̈2(p, v, 0, 0, x1, x2) dx1 dx2
fPV(p, v, τ) = ──────────────────────────────────────────────────────────────────── (19)
               ∫_{-∞}^{0} ∫_{0}^{∞} -x1 x2 fη̇1η̇2η̈1η̈2(0, 0, x1, x2) dx1 dx2

where fη1η2η̇1η̇2η̈1η̈2 is the joint density function of η1, η2, η̇1, η̇2, η̈1 and η̈2. Further, η1 = η(t1) and
η2 = η(t2) = η(t1 + τ). The derivatives η̇ and η̈ are assumed to exist.
In order to estimate the density function of the range between two successive extremes it is also
necessary to estimate the density function of the times T1 between successive extremes, fT1(t).
This first-passage problem cannot in general be solved analytically. A simple estimate of fT1(t)
can be obtained by using the upper bound (the crossing rate)

(20)

The approximation (20), which can be calculated numerically, is used in the interval 0 ≤ t ≤ T0,
where T0 is determined from a normalization condition. fT1(t) = 0 when t > T0.
Using (19) and (20) the density function of wave heights can now be determined

(21)

and it then follows that

Ni = (T/Tz) ∫_{Hi-1}^{Hi} fH(h) dh (22)
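For a narrow-band Gaussian sea the wave-height distribution implied by (19) - (21) is often approximated by a Rayleigh law, P(H > h) = exp(-2 (h/Hs)^2), in which case (22) has a closed form. The storm parameters and group boundaries below are illustrative.

```python
import math

Hs, Tz = 6.0, 8.0                     # significant wave height [m], zero-crossing period [s]
T = 6 * 3600.0                        # storm duration [s]
n_waves = T / Tz                      # expected total number of waves, T/Tz

def p_exceed(h):
    """Rayleigh (narrow-band) approximation to the wave-height exceedance."""
    return math.exp(-2.0 * (h / Hs) ** 2)

edges = [0.0, 4.0, 8.0, 12.0, 999.0]  # group boundaries [H_{i-1}, H_i[
N = [n_waves * (p_exceed(lo) - p_exceed(hi))   # closed-form version of (22)
     for lo, hi in zip(edges[:-1], edges[1:])]
```

The counts decrease sharply with wave height, so almost all of the expected permanent deformation typically comes from the few largest groups.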

Consider the case where all quantities except the sea surface and the model uncertainty variable
ZD are deterministic. The permanent displacements from a single wave (at the start of the storm)
with wave height (Hi-1 + Hi)/2 can be determined using a non-linear finite element program and

a wave loading program, e.g. the RASOS program developed during the BRITE P1270 project,
see Gierlinski [11]. The probability of failure can then be estimated.
Let the stochastic variables (for example yield stresses, model uncertainties, load parameters in
Morison's equation and quantities in the member model) be denoted Y = (Y1, ... , Yn). Then Di
will be a function of Y. If the stochastic variables Y also include quantities defining the sea surface
process (for example the significant wave height) then Ni will also be dependent on Y.
With only one failure element in the structural system the probability of failure can be determined
from

Pf = P(M ≤ 0)
   = P(Dc - ZD Σ_{i=1}^{NH} Ni(Y) Di(Y) ≤ 0)
   = ∫ P(Dc - ZD Σ_{i=1}^{NH} Ni(Y) Di(Y) ≤ 0 | Y = ȳ) fY(ȳ) dȳ (23)

The conditional probability of failure Pf|Y(ȳ) in (23) can be determined as described above.
Numerical determination of the multi-dimensional integral in (23) is generally very computer time
consuming, but the computational efforts can be reduced significantly by using the fast probability
integration technique based on FORM/SORM, see Wen & Chen [12]. A systems reliability index
can be determined if the failure modes are modelled as elements in a series system.
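The source evaluates (23) by FORM/SORM fast integration; purely as a structural illustration, a crude Monte Carlo version with a single lognormal model uncertainty ZD (an assumed distribution, not taken from the paper) looks as follows.

```python
import numpy as np

rng = np.random.default_rng(1)
Dc = 0.5                              # critical permanent displacement [m]
sum_ND = 0.30                         # precomputed sum of N_i * D_i (illustrative)

# Model uncertainty Z_D: lognormal with median 1 and roughly 30% COV (assumed)
z = rng.lognormal(mean=0.0, sigma=0.3, size=200_000)
pf = float(np.mean(Dc - z * sum_ND <= 0.0))   # Monte Carlo version of (23)
```

FORM/SORM replaces the sampling by a search for the most likely failure point, which is what makes the nested structure of (23) affordable when each Di(Y) requires a non-linear finite element run.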

Figure 2. Two possible blockings of waves.

In the above model no influence of the sequence effects is taken into account. In the following some
simple methods to take these effects into account are discussed. In figure 2 two possible groupings
of the waves are shown. The idea is to group all waves with wave height in one interval, e.g.
[Hi-1, Hi[, into one group of waves containing Ni waves. One "extreme" situation is first to consider
the smallest waves and next the increasing wave heights. Another "extreme" is first to consider
the largest waves and next the decreasing wave heights. For each of the two extreme situations
the permanent displacements are still estimated on the basis of single waves, but the accumulated
displacements at the end of one group of waves are used as input to the structural analysis of
the characteristic wave of the next block. These models will not increase the computational work
significantly.
When the largest waves are considered first an extension of the above method is to perform a
complete structural analysis corresponding to the whole group of the largest waves. The second
largest waves may also be analysed in the same way. The remaining waves are treated as above.
This method will increase the computational work significantly. However, it can be expected to
give a much better estimate of the accumulated permanent deformations.
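The block-wise scheme, in which the accumulated displacement after one wave group initialises the analysis of the next, can be sketched as below; the response function is a made-up stand-in for a full non-linear finite element analysis.

```python
def run_blocks(blocks, wave_response, largest_first=True):
    """Analyse wave-height groups sequentially, feeding the accumulated
    permanent displacement of one block into the analysis of the next.

    blocks        : list of (n_waves, wave_height) tuples
    wave_response : callable (height, current_disp) -> per-wave permanent disp.
    """
    seq = sorted(blocks, key=lambda b: b[1], reverse=largest_first)
    disp = 0.0
    for n, h in seq:
        disp += n * wave_response(h, disp)
    return disp

# Hypothetical response model: an already-displaced configuration picks up
# less extra displacement per wave, so the ordering of the groups matters.
resp = lambda h, d: max(h - 5.0, 0.0) * 1e-3 / (1.0 + 10.0 * d)
blocks = [(1000, 4.0), (100, 7.0), (5, 11.0)]
small_first = run_blocks(blocks, resp, largest_first=False)
large_first = run_blocks(blocks, resp, largest_first=True)
```

The two orderings bracket the sequence effect: with this (assumed) response model the largest-first ordering accumulates less displacement than the smallest-first one.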
The simple method is at present being tested at the University of Aalborg.

5. ACKNOWLEDGEMENTS
This paper represents part of the results of BRITE Project P1270, "Reliability Methods for Design and Operation
of Offshore Structures". This project is funded through a 50% contribution from the Directorate General for Science,
Research and Development of the Commission of the European Communities and another 50% contribution from
the partners and industrial partners. The partners in the project are: TNO (NL), d'Appolonia (I), RINA (I),
Snamprogetti (I), The University of Aalborg (DK), Imperial College (UK), Atkins ES (UK), Elf (F), Bureau Veritas
(F), CTICM (F) and IFREMER (F). The industrial sponsors in the project are: SHELL Research (NL), SHELL
UK (UK), Maersk Oil and Gas (DK), NUC (DK) and Rambøll & Hannemann (DK).

6. CONCLUSION
Methods to estimate the permanent deformations during one storm are described, namely simula-
tion, a differential equation approach and the so-called simple method. In the differential equation
approach the second-order moments of the time-dependent behaviour of the permanent defor-
mations are estimated. The basic idea in the simple method is to consider single waves and to
accumulate linearly the permanent displacements from these. It is described how the magnitude
and number of single waves in one storm can be determined. The permanent displacements from
one single wave are assumed to be determined using a general non-linear finite element program.
Some extensions/improvements of the simple method taking into account the sequence effects are
discussed. One idea is to consider the groups of basic waves sequentially and to use the accumulated
permanent displacements after one group as starting values for the analysis of the basic wave which
represents the next group of waves.
The main drawbacks and advantages of the simulation approach are:
Drawbacks:
• very computer time consuming
• expensive to include other stochastic variables than those modelling the load process
Advantages:
• the load process can be modelled rather precisely
• elastic-plastic structural systems can be modelled accurately
The main drawbacks and advantages of the differential equation approach are:
Drawbacks:
• large number of differential equations
• very computer time consuming
• expensive to include other stochastic variables than those modelling the load process
• brittle structural elements cannot be modelled
Advantages:
• the load process can be modelled rather precisely
• elastic-plastic structural systems can be modelled accurately
• the differential equations which model the permanent deformations are exact (with respect to
  the assumptions) and describe the time-dependent behaviour exactly
• dynamic effects can be included
The main drawbacks and advantages of the simple method are:
Drawbacks:
• the estimates of the permanent displacements are generally rather inexact
Advantages:
• not very computer time consuming
• it is possible to include other stochastic variables than those modelling the load process using
  a FORM/SORM approach
• brittle structural elements can be modelled
• elastic-plastic structural systems can be modelled accurately
Compared with the simple method to estimate the permanent displacements the differential equa-
tion approach has the advantage that sequence effects can be taken into account. Compared with
simulation the differential equation approach has the advantage that it is possible to incorporate it
in a FORM/SORM analysis. The main advantage of the simple method compared with the other
methods is that it is not very computer time consuming, i.e. it is practically applicable. However,
testing of the accuracy of the simple method has not yet been completed.

7. REFERENCES
[1] Grinda, K.G., W.C. Clawson & C.D. Shinners: Large-Scale Ultimate Strength Testing of
Tubular K-braced Frames. OTC paper 5832, 1988, pp. 227-236.
[2] Papadrakakis, M. & K. Loukakis: Inelastic Cyclic Response of Restrained Imperfect
Columns. ASCE, Journal of Engineering Mechanics, Vol. 114, No. 2, 1988, pp. 295-313.
[3] Nielsen, S. R. K., K. J. Mørk & P. Thoft-Christensen: Stochastic Response of Hysteretic
Systems. Structural Reliability Theory, Paper No. 39, The University of Aalborg, 1988. To
be published in Structural Safety.
[4] Toro, G.R. & C.A. Cornell: Extremes of Gaussian Processes with Bimodal Spectra. ASCE,
Journal of Engineering Mechanics, Vol. 112, No.5, 1986, pp. 465-484.
[5] Sørensen, J.D. & P. Thoft-Christensen: Estimation of Permanent Displacements by Simu-
lation - Part II. Report B(III.2)3, BRITE Project P1270, University of Aalborg, 1988.
[6] Sørensen, J.D., P. Thoft-Christensen & S.R.K. Nielsen: Estimation of Permanent Dis-
placements by Differential Equation Approach. Report B(III.2)4, BRITE Project P1270,
University of Aalborg, 1988.
[7] Sørensen, J.D. & P. Thoft-Christensen: Estimation of Permanent Displacements by the
Simple Method. Report B(III.2)6, BRITE Project P1270, University of Aalborg, 1989.
[8] Bathe, K.-J. : Finite Element Procedures in Engineering Analysis. Prentice-Hall, 1982.
[9] Baber, T. T.: Modal Analysis for Random Vibration of Hysteretic Frames. Earthquake
Engineering and Structural Dynamics, Vol. 14, 1986, pp. 841-859.
[10] Madsen, H. 0., S. Krenk & N. C. Lind: Methods of Structural Safety. Prentice-Hall, 1986.
[11] Gierlinski, J.T. : RASOS : Reliability Analysis System for Offshore Structures. BRITE
Project P1270, Atkins ES, 1990.
[12] Wen, Y. K. & H. C. Chen: On Fast Integration for Time Variant Structural Reliability.
Probabilistic Engineering Mechanics, Vol. 2, 1987, pp. 156-162.
RELIABILITY OF CURRENT STEEL BUILDING
DESIGNS FOR SEISMIC LOADS

Y. K. Wen, D. A. Foutch, D. Eliopoulos & C.-Y. Yu


University of Illinois at Urbana-Champaign
205 N. Mathews, Urbana, IL, 61801, USA

ABSTRACT
This is a progress report of a research project which evaluates the performance and safety
of buildings designed according to the recently proposed procedures such as Uniform Building
Code (UBC) and Structural Engineers Association of California (SEAOC). The extensive results
from recent analytical studies of structural behavior, laboratory tests of structural and nonstructural
components and field ground motion and damage investigations form the data basis for this study.
State-of-the-art reliability methods are used. The study concentrates on low- to medium-rise steel
buildings. Both time history and random vibration methods are used for the response analysis.
Limit states considered include maximum story drift, damage to nonstructural components and
content, and low cycle fatigue damage to members and connections. The risks implied in the
current procedure, for example those based on various Rw factors for different structural types,
will be calculated and their consistency examined.

INTRODUCTION
The commonly accepted philosophy in design of a building under seismic loads is to ensure
that it will withstand a minor or moderate earthquake without structural damage and survive a
severe one without collapse. To implement this design philosophy successfully, however, one has
to take into consideration the large uncertainties normally associated with the seismic excitation
and the considerable variabilities in structural resistance because of differences in structural type
and design and variability of material strength. This has not yet been done in current practice in
building design, although the need for consideration of the uncertainty involved has long been
recognized, especially in design of nuclear structures.
A large amount of knowledge has been accumulated from experience on building
performance in recent earthquakes, such as those in Japan, Mexico, and this country; also,
considerable progress has been made in the structure reliability research. In view of these
developments, the objective of this research is, therefore, to evaluate the performance and safety
of buildings designed according to the recently proposed and adopted procedures; namely, the
provisions recommended by the Structural Engineers Association of California (SEAOC) and
Uniform Building Code (UBC). Emphasis is on the realistic modeling of the specific buildings,

the nonlinear inelastic behavior of such design, and the quantification of the effect of the large
uncertainties in the seismic excitation. The tasks required for this research are summarized in the
following.

(1) Selection of Site and Risk Analysis. Two sites are considered for the study of building
response, both in Southern California. One of these is close to a major fault and the other one
is at some distance from it (see Fig. 1). The potential future earthquakes that present a threat to
the site are characterized as either characteristic or non-characteristic. The former are major
seismic events which occur along the major fault (Fig. 1) and with relatively better understood
recurrence time behavior [1]; the latter are minor local events whose occurrences collectively
can be treated as a Poisson process [2,3]. The major parameters of the characteristic earthquake
for risk analysis are recurrence time, magnitude, epicentral distance to the site and attenuation,
whereas those for non-characteristic earthquakes are occurrence rate, local intensity, and duration.
They are treated as random variables and used as input to the random process ground motion
model.

(2) Modeling of Ground Motion. The ground motion model is that of a nonstationary random
process whose intensity and frequency content vary with time [4]. This model allows straightforward
identification of parameters from actual ground accelerograms, computer simulation of
the ground motion for time history response analysis, and analytical solution of inelastic structure
response by method of random vibration [5]. For the site where actual earthquake ground motion
records are available (i.e., Imperial Valley, California), model parameters are estimated and used
to predict structural response to future earthquakes. For the site where no such records are
available, a procedure has been established to determine the model parameters as functions of
those of the source, i.e., magnitude, epicentral distance, etc. based on information given in [6].
Also, for sites close to the fault, the important directivity effect [7] of the rupture surface is
considered in the ground motion model which is known to affect significantly the frequency content
and duration of the ground motion.
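A heavily simplified sketch of such a model: stationary filtered white noise (a Kanai-Tajimi-like second-order filter) scaled by a deterministic intensity envelope. The model in [4] additionally modulates the frequency content over time and identifies its parameters from recorded accelerograms; every number below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, dur = 0.01, 20.0
t = np.arange(0.0, dur, dt)

# Intensity envelope: linear build-up, strong phase, exponential decay
env = np.minimum(t / 2.0, 1.0) * np.exp(-0.15 * np.maximum(t - 8.0, 0.0))

wg, zg = 2.0 * np.pi * 2.5, 0.6       # ground filter frequency [rad/s], damping
x = v = 0.0
acc = np.zeros_like(t)
for i in range(len(t)):
    w = rng.normal(0.0, 1.0 / np.sqrt(dt))       # discretised white noise
    a = -2.0 * zg * wg * v - wg**2 * x + w       # filter equation of motion
    v += a * dt                                  # semi-implicit Euler step
    x += v * dt
    acc[i] = env[i] * (2.0 * zg * wg * v + wg**2 * x)  # modulated output
```

Realisations of `acc` can then be used for time history analysis, while the same filter and envelope admit an analytical random vibration treatment.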

(3) Building Design. The proposed study requires the design of six low-rise steel building types.
Using the 1988 Uniform Building Code (UBC) designation, the types of building included in this
study are as follows:
(1) Ordinary moment-resisting space frame (OMRSF)
(2) Special moment-resisting space frame (SMRSF)
(3) Concentric braced frame (CBF)
(4) Eccentric braced frame (EBF)
(5) Dual system with CBF
(6) Dual system with EBF

Two types of buildings, OMRSF and SMRSF, are designed in accordance with the 1988
UBC specifications using a computer software developed at the University of Illinois, IGRESS2 [8].
Practicing engineers were consulted to ensure that the designs conform with the current design
procedures. Typical floor plan and elevation views are shown in Fig. 2. Only the perimeter frames
are designed to carry seismic loads. The interior frames are designed with pinned connections at
the ends of each girder. Design loads, member sizes, total weight and Rw values used are
included in Tables 1 to 4. In order to use the higher Rw value allowed for SMRSF, the Code
demands more stringent detailing, e.g., increased control of local buckling of the members and
control of the location of plastic hinge formation. SMRSF is subject to lesser base shear and its
design is more likely controlled by the drift limitation rather than strength requirements. CBF and
EBF buildings are also considered.

(4) Response and Damage Analysis. For a specified set of ground motion and structural
parameters, each building is analyzed for a large number of ground motions in order to determine
the statistics of responses and levels of damage. The following response characteristics for each
frame are compared: (1) story drifts; (2) damage of nonstructural elements; (3) energy dissipation
demand; and (4) damage index. An upgraded version of the well known finite element program
DRAIN-2DX [9] is used for the simulations. This program includes elements that may be used to
model structural elements. The damage of nonstructural elements and the maximum story drift are
directly and indirectly related to the damage of a real building; nonstructural elements include
partition walls, cladding, etc. Structural damage appears in the form of local buckling and fracture
around joints. This has been confirmed by the US/Japan cooperative full-scale tests and from
previous studies of hysteretic energy dissipation.
A parallel analysis of response based on a time domain random vibration method [5] is also
carried out. This method gives response statistics of interest in this study such as maximum
displacement and hysteretic energy dissipation. Hysteretic energy dissipation demand and the
number of inelastic excursions are directly related to the damage index. A damage index based
on the concept of low-cycle fatigue life for the beam-to-column connection [10] is developed.
After each time history analysis, important response quantities are extracted, used for the
calculation of the damage index and stored. The results from these simulations are compared with the
random vibration analysis results. As an example, Fig. 3 shows the response statistics of SMRSF
and OMRSF under future excitations characterized by the ground motion of the Corralitos station
of the 1989 Loma Prieta earthquake. Only the mean responses are shown and compared with the
design value and response under the actual Loma Prieta ground acceleration. The coefficient of
variation of the drift is 40% at the first floor and decreases to 11% at the roof.
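A Miner-type low-cycle fatigue index over the inelastic excursions can be sketched as follows; the Coffin-Manson-style capacity parameters are invented for illustration and are not the values developed in [10].

```python
def fatigue_damage(plastic_ranges, c=0.1, m=2.0):
    """Miner-type damage index: sum of 1/N_f over inelastic excursions,
    assuming N_f = (c / range)^m cycles to failure at a given plastic range.
    c and m are illustrative capacity parameters, not values from [10]."""
    return sum((r / c) ** m for r in plastic_ranges)

ranges = [0.02, 0.05, 0.03, 0.08]   # plastic ranges per excursion (illustrative)
damage = fatigue_damage(ranges)     # failure indicated when damage >= 1
```

The per-excursion ranges would in practice be extracted from the hysteretic response of each time history run, tying the damage index to the hysteretic energy dissipation demand discussed above.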

(5) Limit State Risk Evaluation. Based on first- and second-order reliability methods and the fast
integration technique [11] for time variant systems, the effects of ground motion and structural
parameter uncertainties are included, and the risk per year or for a given period of time of the
foregoing limit states of interest will be evaluated. For this purpose, methods based on a response
surface technique [12] are also considered, which generally increases the efficiency of the method
when the number of parameters considered is large. This will be especially effective in connection
with the simulation study.
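The response-surface idea can be sketched by fitting a quadratic surface to a small design of "expensive" analyses and then sampling the cheap surrogate; the limit-state function below is a synthetic stand-in for a full dynamic analysis.

```python
import numpy as np

rng = np.random.default_rng(3)

def expensive_model(x1, x2):          # stand-in for one full response analysis
    return 3.0 - 0.8 * x1 - 0.5 * x2 + 0.1 * x1 * x2

# Fit a quadratic response surface on a 3x3 grid of design points
pts = np.array([[i, j] for i in (-2.0, 0.0, 2.0) for j in (-2.0, 0.0, 2.0)])
y = np.array([expensive_model(a, b) for a, b in pts])
basis = lambda u1, u2: np.column_stack(
    [np.ones_like(u1), u1, u2, u1**2, u2**2, u1 * u2])
coef, *_ = np.linalg.lstsq(basis(pts[:, 0], pts[:, 1]), y, rcond=None)

# Cheap Monte Carlo on the fitted surface (standard normal inputs assumed)
u = rng.standard_normal((200_000, 2))
g = basis(u[:, 0], u[:, 1]) @ coef
pf = float(np.mean(g <= 0.0))
```

Only nine "expensive" evaluations are needed here; all subsequent sampling (or a FORM search) runs on the fitted surface, which is what makes the approach attractive when many parameters enter the limit state.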

(6) Appraisal of Current Code Procedures. The consistency of current design procedures will
be examined based on the result of the risk analysis. Emphasis will be on identifying implication
of various factors used in the design, in particular, the Rw reduction factor, for different building
types. Also, risks of joints and bracing members against fatigue type cumulative damage will be
examined.

ACKNOWLEDGMENT
This research is supported by the National Science Foundation under grant NSF CES-88-
22690. The support is gratefully acknowledged.

REFERENCES

(1) Working Group on California Earthquake Probabilities. "Probabilities of Large Earthquakes
Occurring in California on the San Andreas Fault," USGS Open-File Report 88-398, 1988.

(2) Algermissen, S. T.; D. M. Perkins; P. C. Thenhaus; S. L. Hanson; and B. L. Bender.
"Probabilistic Estimates of Maximum Acceleration and Velocity in Rock in the Contiguous
United States," USGS Open-File Report 82-1033, 1982.

(3) Cornell, C. A. and S. R. Winterstein. "Temporal and Magnitude Dependence in Earthquake
Recurrence Models," Stochastic Approaches in Earthquake Engineering, US-Japan Joint
Seminar, May 1987, Boca Raton, Florida, U.S.A.

(4) Yeh, C. H. and Y. K. Wen. "Modeling of Nonstationary Ground Motion and Analysis of
Inelastic Structural Response," Journal of Structural Safety, 1990.

(5) Wen, Y. K. "Method of Random Vibration for Inelastic Structures," Applied Mechanics
Reviews, Vol. 42, No. 2, February 1989.

(6) Trifunac, M. D. and V. W. Lee. "Empirical Models for Scaling Fourier Amplitude Spectra
of Strong Earthquake Accelerations in Terms of Magnitude, Source to Station Distance, Site
Intensity and Recording Site Conditions," Soil Dynamics and Earthquake Engineering,
Vol. 8, No. 3, July 1989.

(7) Singh, J. P. "Characteristics of Near-Field Ground Motion and Their Importance in
Building Design," ATC-10-1, Critical Aspects of Earthquake Ground Motion and Building
Damage Potential, 1984.

(8) Ghaboussi, J. "An Interactive Graphics Environment for Analysis and Design of Steel
Structures," IGRESS2, Version 2.0, Prairie Technologies Inc., Urbana, Illinois 61801.

(9) Kanaan, A. E. and G. H. Powell. "DRAIN-2D: A General Purpose Computer Program for
Inelastic Dynamic Analysis of Plane Structures," Report EERC-73/06, University of
California at Berkeley, April 1973.

(10) Krawinkler, H. "Performance Assessment of Steel Components," Earthquake Spectra,
Vol. 3, No. 1, February 1987, pp. 27-42.

(11) Wen, Y. K. and H. C. Chen. "On Fast Integration for Time Variant Structural Reliability,"
Probabilistic Engineering Mechanics, Vol. 2, No. 3, 1987, pp. 156-162.

(12) Iman, R. L. and J. C. Helton. "A Comparison of Uncertainty and Sensitivity Analysis
Techniques for Computer Models," Report NUREG/CR-3904, Sandia National Laboratories,
Albuquerque, New Mexico.

Roof:    Concrete Slab with Decking         42 psf
         Mechanical and Electrical           4 psf
         Ceiling                             8 psf
         Structural Steel and Fireproofing  15 psf
         Insulation and Membrane            11 psf
         Total Uniform Dead Load            80 psf
         Uniform Live Load                  20 psf

Floor:   Concrete Slab with Decking         42 psf
         Mechanical and Electrical           3 psf
         Ceiling and Floor Covering         10 psf
         Structural Steel and Fireproofing  20 psf
         Partitions                         20 psf
         Total Uniform Dead Load            95 psf
         Uniform Live Load                  50 psf

Facade:  Cladding on Exterior Wall          30 psf

Table 1 Design Load



STORY   C2        C3        C4        C1        C5

5       W24X62    W24X68    W24X68    W27X84    W27X102

3 - 4   W27X114   W27X146   W27X146   W33X118   W30X173

1 - 2   W33X221   W33X221   W33X221   W36X230   W36X260

Table 2 OMRSF Column Schedule

STORY   C2        C3        C4        C1        C5

5       W24X68    W24X76    W24X76    W24X68    W24X76

3 - 4   W27X94    W30X108   W30X108   W27X94    W33X130

1 - 2   W33X141   W36X150   W36X150   W33X130   W36X182

Table 3 SMRSF Column Schedule

                OMRSF               SMRSF
STORY   G1        G2        G1        G2

5       W24X76    W30X99    W24X55    W27X94

4       W30X116   W33X130   W24X68    W30X108

3       W33X118   W33X141   W30X99    W33X118

2       W36X135   W36X170   W30X99    W33X118

1       W36X135   W36X170   W30X99    W33X118

Table 4 Girder Schedule



[Figure: map showing the central creeping, Parkfield, San Bernardino Mountain, and Coachella Valley segments; scale bar 0-100 km]

Fig. 1 Segments of the Central and Southern San Andreas Fault

[Figure: typical floor plan (bays 40', 30', 40' by 6 @ 30'-0 = 180'-0) and elevation view of the seismic frames (story heights 15' at the first story and 13' above; pinned connections marked)]

Fig. 2 Floor Plan and Elevation View of OMRSF and SMRSF Studied
[Figure: story-level envelopes of drift ratio (%) and story shear (kips) for the SMRSF and OMRSF, stories 1F through the roof]

Fig. 3 Comparison of Predicted Mean Response with Design Value and Time History
Results Based on Actual Loma Prieta Accelerogram
JACKUP STRUCTURES
NONLINEAR FORCES AND DYNAMIC RESPONSE

Steven R. Winterstein*, Robert Løseth**


*Department of Civil Engineering
Stanford University, Stanford, CA 94305-4020
**A.S. Veritas Research, P.O. Box 300
N-1322 Høvik, Norway

ABSTRACT

Simple analytical methods are shown for stochastic nonlinear dynamic analysis of offshore jacket
and jackup structures. Base shear forces are first modelled, and then imposed on a linear
1DOF structural model to predict responses such as deck sway. The force model retains the
effects of nonlinear wave kinematics and Morison drag on base shear moments, extremes, and
spectral densities. Analytical models are also given for response moments and extremes. Good
agreement with simulation is found for a sample North Sea jackup. The effects of variations in
environmental and structural properties are also studied.

INTRODUCTION

Jackup platforms have become a standard tool for offshore operations, to water depths of about
100 meters. Recent trends have extended their use to deeper, more hostile environments, such
as all-year operation in the North Sea. This places additional demands on the jackup, whose
horizontal stiffness is typically an order of magnitude less than that of a corresponding jacket
structure. Due to its flexibility, the fundamental period of a jackup may typically range from
3 to 8 seconds. Because there may be considerable wave energy at these frequencies, dynamic
effects should not be neglected.
Simple analytical methods are shown here for stochastic nonlinear dynamic analysis of
jackets and jackups. Base shear forces are first modelled, and then applied to a linear 1DOF
structural model to predict responses such as deck sway. The force model retains various nonlin-
ear effects, such as nonlinear wave kinematics and Morison drag. Analytical results are given for
the marginal moments, extreme base shear fractiles and power spectra. These spectra include
the additional high-frequency content induced by nonlinearities, which may increase resonant
response effects. Corresponding analytical models are also given for response moments and
extremes.
Good agreement is found with simulated results for a sample North Sea jackup. The
effects of variations in environmental and structural properties are also shown. Increasing wave
height and integration of particle velocities to the exact surface are found to significantly affect
both gross force levels and the relative contribution of nonlinear, non-Gaussian effects. In
contrast, varying structural properties may have somewhat offsetting effects. Larger periods or
smaller damping yield greater dynamic effects, which generally raise response variance but often
reduce higher response moments. Gaussian models may then underpredict response extremes,
yet overestimate their variation with structural period and damping.

LOAD STATISTICS: BASE SHEAR FORCES

The gross effects of waves and current are shown by the total base shear and overturning moment
they cause. We focus here on the base shear F(t) on jackup and jacket structures, predicting
its mean, variance, power spectrum, skewness and kurtosis. The former three quantities are
sufficient for linear response analysis if F(t) is Gaussian; the latter two reflect non-Gaussian force
effects. We conclude this section by using these statistics to estimate maximum base shear levels
and force spectral densities. Entirely analogous techniques may be used to estimate statistics of
overturning moment. In the next section, these force statistics are used to analytically predict
nonlinear effects on structural responses such as deck sway.
Two levels of modelling are considered: conventional state-of-the-art simulation for ran-
dom irregular waves and a new analytical approach. Particle velocities and accelerations below
the mean water level z=0 are found from linear wave theory, and "stretched" to the exact surface
through vertical extrapolation of their values at z=O. The nonlinear Morison model is used to
estimate the wave force on each leg per unit elevation; this result is integrated over elevation to
provide the total base shear at any time.

Simulated Wave Force Models


For irregular seas, it is common to specify the power spectral density of the wave elevation η.
We adopt here the JONSWAP model, with the following two-sided spectral density (per circular
frequency ω):

(1)

in terms of the significant wave height Hs, the peak spectral period Tp, and the spectral peak
factor γ. The constant C(γ) = 1 - 0.287 ln(γ) is introduced to roughly preserve the total spectral
area ση² = Hs²/16 (Andersen et al., 1987). This reference also suggests the following peak factor:

(2)
In the special case where γ = 1, Eq. 1 gives the ISSC or Pierson-Moskowitz spectrum.
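As a concrete illustration, the normalization just quoted is easy to check numerically. The sketch below assumes the standard JONSWAP parameterization (one-sided spectrum, conventional bandwidth parameters 0.07/0.09); only the C(γ) factor and the target area Hs²/16 come from the text above.

```python
import numpy as np

def jonswap(w, Hs, Tp, gamma):
    """One-sided JONSWAP elevation spectrum; a sketch of the ingredients of Eq. 1.

    Assumes the standard parameterization with bandwidth parameters 0.07/0.09.
    The factor C(gamma) = 1 - 0.287 ln(gamma) roughly preserves the total
    spectral area Hs^2/16, as noted in the text."""
    wp = 2.0 * np.pi / Tp
    C = 1.0 - 0.287 * np.log(gamma)
    sig = np.where(w <= wp, 0.07, 0.09)
    peak = gamma ** np.exp(-0.5 * ((w - wp) / (sig * wp)) ** 2)
    pm = (5.0 / 16.0) * Hs**2 * wp**4 * w**-5.0 * np.exp(-1.25 * (wp / w) ** 4)
    return C * pm * peak

# Seastate of the numerical example below: Hs = 11.5 m, Tp = 12.5 s, gamma = 4.5
w = np.linspace(1e-2, 3.0, 4000)
S = jonswap(w, 11.5, 12.5, 4.5)
area = float(np.sum(0.5 * (S[1:] + S[:-1]) * np.diff(w)))  # ~ Hs^2/16
```

For γ = 1 the peak factor drops out and the integral recovers the Pierson-Moskowitz area Hs²/16 exactly; for γ > 1 the C(γ) normalization preserves it only approximately.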
We consider here long-crested waves travelling in the positive x-direction. We seek the
wave elevation ηx(t) as a function of location x and time t, as well as the horizontal wave particle
velocity and acceleration, uz,x(t) and u̇z,x(t), for various x, t, and elevation levels z. Based on
linear wave theory, these quantities can be simulated as follows (Borgman, 1969):

  ηx(t)   =   Σi Ai cos(ψi)
  uz,x(t) =   Σi Ai ωi T(z,ωi) cos(ψi)        ψi = ωi t + φi - ki x        (3)
  u̇z,x(t) = - Σi Ai ωi² T(z,ωi) sin(ψi)

in terms of independent, uniformly distributed phases φi at the fixed frequencies ωi. Note
that the only exogenous information in this result is the wave elevation spectrum, Sη(ω). The
amplitudes Ai are chosen from this spectrum as

(4)
The wave number ki and transfer function T(z,ωi) corresponding to frequency ωi in Eq. 3 are
found by solving

ωi² = g ki tanh(ki d) ;        T(z,ωi) = cosh[ki(z + d)] / sinh(ki d)        (5)

The first result here is the dispersion relation, which must generally be inverted numerically
to find ki from ωi. The sums in Eq. 3 are typically evaluated with the Fast Fourier Transform
("FFT"), although increased efficiency may be gained through the real-valued Fast Hartley
Transform, or "FHT" (Winterstein, 1990). This reference also shows FHT simulation of non-
Gaussian sea surfaces.
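A minimal random-phase realization of Eqs. 3-5 can be sketched as follows. The Newton iteration for inverting the dispersion relation and the one-sided amplitude rule Ai = √(2 Sη(ωi) Δω) are this sketch's assumptions; the water depth is taken from the numerical example later in the paper.

```python
import numpy as np

g, d = 9.81, 83.6  # gravity [m/s^2]; water depth [m] of the example jackup

def wave_number(w):
    """Invert the dispersion relation w^2 = g k tanh(k d) (Eq. 5) by Newton iteration."""
    k = w * w / g                                   # deep-water starting guess
    for _ in range(50):
        f = g * k * np.tanh(k * d) - w * w
        df = g * np.tanh(k * d) + g * k * d / np.cosh(k * d) ** 2
        k = k - f / df
    return k

def simulate_kinematics(S, w, dw, t, x=0.0, z=0.0, seed=0):
    """Random-phase realization of eta, u, udot at one point (Eq. 3).

    S is a one-sided elevation spectrum sampled at frequencies w; the
    amplitude rule A = sqrt(2 S dw) is the usual one-sided convention."""
    rng = np.random.default_rng(seed)
    A = np.sqrt(2.0 * S * dw)                        # amplitudes from the spectrum (Eq. 4)
    phi = rng.uniform(0.0, 2.0 * np.pi, w.size)      # independent uniform phases
    k = wave_number(w)
    T = np.cosh(k * (z + d)) / np.sinh(k * d)        # transfer function, Eq. 5
    psi = np.outer(t, w) + phi - k * x               # psi_i = w_i t + phi_i - k_i x
    eta = np.sum(A * np.cos(psi), axis=1)
    u = np.sum(A * w * T * np.cos(psi), axis=1)
    ud = np.sum(-A * w**2 * T * np.sin(psi), axis=1)
    return eta, u, ud
```

A direct sum like this costs O(Nt·Nw); the FFT/FHT evaluation mentioned above produces the same realizations far more cheaply.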
Note that different wave components in Eq. 3 travel at different speeds. For relatively deep
water, Eq. 5 gives λ/T² = g/(2π) = 1.56 m/s². Various wave components may thus be reinforced
or cancelled, depending on the relation between their wave lengths and structural dimensions.
For example, resonant jackup response may be due to 5-6 second waves, with lengths (40-60
m) comparable to typical leg spacings. In contrast, the leg spacing may be comparable to the
half-wavelengths of longer, 7-9 second waves. The net force due to these slower wave components
may then be reduced, due to their incoherence.
Finally, the total base shear F(t) is found by applying the Morison force model to each
leg:

F(t) = Σlegs ∫-d..η [ kd (u + u0)|u + u0| + km u̇ ] dz        (6)

in terms of the drag and mass coefficients, kd and km, the particle velocity u and acceleration u̇
from Eq. 3, and the current velocity u0. Some formulations use Eq. 6 with the relative particle
velocity with respect to the structure; this is not done here. Above the mean water surface
(z > 0), we employ constant stretching of the linear wave theory values at z=0:

u(z,t) = u(0,t) ;   u̇(z,t) = u̇(0,t)    for 0 < z ≤ η(t)        (7)

Unlike simulation with linear kinematics, proper stretching requires that we explicitly
simulate the elevation η in Eq. 3, along with particle velocities and accelerations. While FFT or
FHT simulations yield entire histories of these quantities simultaneously, the force integration
in Eq. 6 must then be performed numerically at each time step. The associated numerical costs
motivate the need for analytical models, such as those considered below.
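The per-leg force integration of Eqs. 6-7 at one time step can be sketched as below. The distributed coefficients kd = ρ Cd r and km = ρ Cm π r² (for an equivalent tubular of radius r) and the numerical values are this sketch's assumptions, drawn from the example jackup described later.

```python
import numpy as np

rho, Cd, Cm, r, d = 1025.0, 2.5, 2.0, 1.25, 83.6   # example-jackup values (assumed density)
kd = rho * Cd * r                  # drag coefficient per unit length, 0.5*rho*Cd*(2r)
km = rho * Cm * np.pi * r**2       # inertia coefficient per unit length

def leg_shear(eta, u_of_z, ud_of_z, u0=1.0, nz=400):
    """Base shear on one leg at one instant (Eq. 6): Morison drag plus inertia,
    integrated from the sea bed to the instantaneous surface eta.

    Above the mean water level the kinematics are frozen at their z=0
    values, i.e. the constant stretching of Eq. 7."""
    z = np.linspace(-d, eta, nz)
    zc = np.minimum(z, 0.0)                        # constant stretching (Eq. 7)
    u, ud = u_of_z(zc), ud_of_z(zc)
    f = kd * (u + u0) * np.abs(u + u0) + km * ud   # force per unit elevation
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z)))
```

Summing over the legs (their along-wave offsets entering through the phase ψ) gives the total base shear F(t); repeating this integration at every time step is precisely the numerical cost that motivates the analytical models below.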

Analytical Wave Force Models


Simulation with Eq. 3 serves here as a basis for comparison with simpler analytical models. In
particular, we adopt here a convenient narrow-band wave model (Haver, 1990), replacing Eq. 3
by
  ηx(t)   =   A(t) cos(ψ(t))
  uz,x(t) =   A(t) ωp T(z,ωp) cos(ψ(t))        ψ(t) = ωp t + φ(t) - kp x        (8)
  u̇z,x(t) = - A(t) ωp² T(z,ωp) sin(ψ(t))
Here A(t) and φ(t) are instantaneous amplitude and phase processes, assumed to remain rel-
atively constant while the wave passes over the structure. Thus, we consider the wave profile
over the structure at time t (and the resulting base shear) as a function of only two random
variables, A and φ. This base shear function, F(A,φ), is found by combining Eqs. 5-8. The
result may be expressed analytically in some cases (Haver, 1990).
Eq. 8 uses only the peak frequency ωp = 2π/Tp and resulting wave number kp from Eq. 5. It
therefore fails to properly reflect the full spectrum of wave lengths and periods. It does preserve
their total wave power, however, by taking E[A²] = Hs²/8. Further, by assigning Rayleigh and
uniform distributions to A and φ, respectively, we preserve the Gaussian marginal distribution
of η consistent with linear wave theory. By weighting over these distributions of A and φ, the
force moments follow as

E[F^m] = ∫∫ [F(a,φ)]^m fA(a) fΦ(φ) da dφ        (9)

Thus, the wave force and kinematics models are reflected through the base shear force,
F(a,φ), for regular waves with various amplitudes a and phases φ. The number of force evalu-
ations needed can often be reduced through numerical quadrature. For example,

E[F^m] ≈ Σi=1..N Σj=1..N pi pj [F(aij, φij)]^m ;    aij = 0.25 Hs √(ξi² + ξj²) ,  φij = tan⁻¹(ξj/ξi)        (10)

in terms of the N Hermite quadrature points ξ1, ..., ξN and the corresponding probability
weights pi = N!/[N HeN-1(ξi)]². The values ξi can be found by finding all roots of HeN(ξ) = 0;
alternatively, various sources have tabulated these values for different N.
Finally, note that Eqs. 9-10 can also use other force and kinematic models, leading to
different forces F(a,φ). For example, linear kinematics can be extended with Wheeler, delta
or other stretching models (Gudmestad and Spidsøe, 1990). For models with non-Gaussian sea
surfaces, however, it may be more difficult to assign distributions fA(a) and fΦ(φ) consistent
with an observed spectrum Sη(ω).
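The quadrature of Eq. 10 is easy to implement with probabilists' Hermite nodes; numpy's `hermegauss` returns weights for the weight function exp(-ξ²/2), so dividing by √(2π) recovers the probability weights pi quoted above. The linear toy force in the usage check is purely illustrative.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def force_moments(F, Hs, N=8, mmax=4):
    """Raw moments E[F^m], m = 1..mmax, by N-point Hermite quadrature (Eq. 10).

    F(a, phi) is the regular-wave base shear. The Rayleigh amplitude with
    E[A^2] = Hs^2/8 and the uniform phase are realized through two
    independent standard normals xi_i, xi_j, exactly as in Eq. 10."""
    xi, wt = hermegauss(N)              # roots of He_N and associated weights
    p = wt / np.sqrt(2.0 * np.pi)       # normalize to probability weights p_i
    mom = np.zeros(mmax)
    for i in range(N):
        for j in range(N):
            a = 0.25 * Hs * np.hypot(xi[i], xi[j])
            phi = np.arctan2(xi[j], xi[i])
            Fij = F(a, phi)
            for m in range(1, mmax + 1):
                mom[m - 1] += p[i] * p[j] * Fij ** m
    return mom

# Illustrative check with F(a, phi) = a: the quadrature reproduces the
# Rayleigh moments E[A^2] = Hs^2/8 and E[A^4] = Hs^4/32 essentially exactly.
mom = force_moments(lambda a, phi: a, Hs=11.5)
```

Because the quadrature is exact for polynomials up to degree 2N-1, polynomial functions of the amplitude are integrated without error; a realistic F(a,φ) from Eqs. 5-8 is merely approximated, which is the accuracy examined in Figure 1.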

Numerical Base Shear Results


We consider here a North Sea jackup (Løseth et al., 1990), whose size and water depth suggest
marked dynamic effects. The water depth is 83.6m, and the platform deck level is 119m above
the sea bed. In the along-wave x direction, the three jackup legs are located at x = 0, 0, and
58.6m. Equivalent tubular members, with Morison coefficients Cm = 2.0 and Cd = 2.5 and radius
r = 1.25m, are used to reflect hydrodynamic loads on each of the lattice legs with racks and
marine growth.
The JONSWAP wave spectrum in Eq. 1 is used with Hs = 11.5m, Tp = 12.5s, and peak
factor γ = 4.5 from Eq. 2. These values are intended to be roughly typical of a North Sea
seastate during an extreme summer storm. (Future studies will weight various (Hs, Tp) pairs by
their relative frequency, to include smaller (Hs, Tp) events that may produce greater resonant
effects.) Combining tidal and wind-driven current, a constant current velocity of u0 = 1 m/s is
used in the base case.
Force Moments. Because we shall represent the base shear through its first four moments
only, we first consider how well these moments are predicted from Eq. 9. Figure 1 shows that
for various wave heights Hs and current velocities u0, these analytical predictions agree quite
well with FFT simulation results.
Gross force levels, reflected by the mean mF and standard deviation σF, increase with
wave height Hs, with current u0, and with integration to the exact surface. Notably, the relative
effects of nonlinearity (shown through the force skewness α3F and kurtosis α4F, and neglected
by Gaussian models) also grow with wave height and integration to the exact surface. Nonlinear
effects grow less systematically with current. As u0 grows, the nonlinear Morison drag force in
Eq. 6 becomes approximately linear: kd(u + u0)|u + u0| ≈ kd|u0|(u0 + 2u).
Force Extremes. Reliability applications typically require additional force statistics, such
as full-distribution information, extremes, and spectral densities. These are conveniently
estimated with four-moment Hermite models, which relate the base shear F(t) to a standard
Gaussian process U(t) as follows (Winterstein, 1988; Winterstein and Ness, 1989):

F(t) = mF + κ σF [ U + c3(U² - 1) + c4(U³ - 3U) ] ,   U = U(t)        (11)

[Figure: four panels (mean [MN], sigma [MN], skewness, kurtosis of base shear) versus wave height (4-14 m) and current (0-1 m/s); simulation (±1σ) compared with analytical predictions for mean-surface and exact-surface kinematics. Tp = 12.5s, d = 83.7m, l = 58.6m, r = 1.25m.]

Figure 1: Base shear moments for various wave heights and current.

[Figure: median extreme base shear [MN] in a 3-hour seastate versus significant wave height Hs (6-12 m); simulation (±1σ) compared with 2-moment Gaussian and 4-moment Hermite analytical models.]

Figure 2: Extreme base shear in 3-hour seastate.

in which c4 = [√(1 + 1.5(α4F - 3)) - 1]/18, c3 = α3F/(6 + 36c4), and κ = 1/√(1 + 2c3² + 6c4²). Alterna-
tive models are available for kurtosis values α4F < 3, although such cases are not encountered
here.
The p-fractile extreme force in period T can be estimated from Eq. 11, taking U as the
corresponding Gaussian extreme estimate:

Up = { 2 ln[ nT / (T0 ln(1/p)) ] }^(1/2)        (12)

Here T0 is the average period and n=1 or 2 for max F(t) or max|F(t)|, respectively. Figure 2
shows median estimates of max F(t) in a typical seastate of duration T=3 hours, using Eq. 12
with n=1, p=.5, and T0 = Tp = 12.5s. The Hermite results use Eqs. 11-12 with all four analytical
force moment estimates; the Gaussian model uses only the first two force moments and sets
c3 = c4 = 0 in Eq. 11. (Only stretched kinematics to the exact surface have been considered in
Figures 2-5.)
Note that although the force mean and standard deviation have been accurately estimated
(Figure 1), a Gaussian model based on these values underestimates extreme 3-hour forces by
roughly 50% in the large-Hs seastates that govern design. The Hermite models use the extra
force skewness and kurtosis information to produce markedly improved results. In view of the
accuracy in all four predicted force moments (Figure 1), the roughly 10% conservative bias in
4-moment extreme estimates is due to the Hermite model (Eq. 11). From systematic study of
various nonlinearities, this error lies within the scatter of various nonlinearities with common
first four moments (Winterstein and Ness, 1989). Moreover, the Hermite model is often found
conservative with respect to various actual nonlinear mechanisms.
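Eqs. 11-12 combine into a few lines of code. The default duration and period below follow the Figure 2 setup (T = 3 hours, T0 = Tp = 12.5s, n = 1, p = 0.5); the sketch covers only the softening case α4F > 3 treated in the text.

```python
import numpy as np

def hermite_extreme(mF, sF, a3, a4, T=3 * 3600.0, T0=12.5, n=1, p=0.5):
    """p-fractile extreme in duration T from the four-moment Hermite model.

    Coefficients follow Eq. 11 (valid for kurtosis a4 > 3); the Gaussian
    extreme fractile Up of Eq. 12 is then mapped through the cubic series."""
    c4 = (np.sqrt(1.0 + 1.5 * (a4 - 3.0)) - 1.0) / 18.0
    c3 = a3 / (6.0 + 36.0 * c4)
    kappa = 1.0 / np.sqrt(1.0 + 2.0 * c3**2 + 6.0 * c4**2)
    Up = np.sqrt(2.0 * np.log(n * T / (T0 * np.log(1.0 / p))))   # Eq. 12
    return mF + kappa * sF * (Up + c3 * (Up**2 - 1.0) + c4 * (Up**3 - 3.0 * Up))
```

With a3 = 0 and a4 = 3 this collapses to the 2-moment Gaussian estimate mF + sF Up; positive skewness and kurtosis inflate the extreme, which is the roughly 50% effect visible in Figure 2.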
Force Spectral Densities. Finally, Eq. 11 implies the following simple relation between the
correlation functions of F(t) and the underlying Gaussian process U(t) (Winterstein, 1990):

ρF(τ) = κ² [ ρU(τ) + 2 c3² ρU²(τ) + 6 c4² ρU³(τ) ]        (13)

The underlying Gaussian correlation function ρU(τ) may be taken from an equivalent linear
force model, for example. The nonlinear terms ρU² and ρU³ show the increased power at two and
three times the principal frequency, respectively, induced by the nonlinearity.
Figure 3 shows corresponding spectral densities, based on the Fourier transform of Eq. 13.
For simplicity, the power spectrum of U(t) is taken as the wave elevation spectrum in Eq. 1.
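The transform behind Figure 3 can be sketched as follows: map the Gaussian correlation through the Hermite coefficients of Eq. 13, then Fourier-transform the even extension of the covariance. The normalization (one-sided density in circular frequency) is this sketch's convention.

```python
import numpy as np

def hermite_force_spectrum(rho_U, dt, sF, a3F, a4F):
    """One-sided force spectrum from Eq. 13.

    rho_U holds the Gaussian correlation at lags 0, dt, 2*dt, ...;
    rho_F = kappa^2 (rho_U + 2 c3^2 rho_U^2 + 6 c4^2 rho_U^3) is then
    transformed via an even (symmetric) extension and a real FFT."""
    c4 = (np.sqrt(1.0 + 1.5 * (a4F - 3.0)) - 1.0) / 18.0
    c3 = a3F / (6.0 + 36.0 * c4)
    kap2 = 1.0 / (1.0 + 2.0 * c3**2 + 6.0 * c4**2)
    rF = kap2 * (rho_U + 2.0 * c3**2 * rho_U**2 + 6.0 * c4**2 * rho_U**3)
    R = sF**2 * np.concatenate([rF, rF[-2:0:-1]])    # even extension of the covariance
    S = np.fft.rfft(R).real * dt / np.pi             # one-sided S(omega)
    omega = 2.0 * np.pi * np.fft.rfftfreq(R.size, d=dt)
    return omega, S
```

The squared and cubed correlation terms push power toward twice and three times the peak frequency, producing the weak higher harmonics seen in both simulation and the Hermite model in Figure 3.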

[Figure: base shear spectral density [N²/Hz, log scale] versus frequency f = 0-0.5 Hz; FFT simulation compared with 2-moment Gaussian and 4-moment Hermite models.]

Figure 3: Base shear spectral density.

The Hermite model based on four analytical moments again shows better agreement with sim-
ulation than the Gaussian model. Although the wave spectrum is rather narrow (-y=4.5), both
simulation and Hermite model show rather weak modes at the higher harmonics f=2/Tp =.16 Hz
and 3/Tp =.24 Hz. There is a general increase both in low- and high-frequency power, however,
not reflected by the wave elevation spectrum-perhaps a factor of about 3 in high-frequency
spectral ordinates.

RESPONSE STATISTICS: THEORY

In this section, we seek similar moment-based Hermite models of dynamic responses. We fo-
cus here on the deck sway response, from which local member stresses can be estimated. The
necessary four response moments for these models can be estimated in various ways, includ-
ing systematic non-Gaussian closure (Winterstein and Ness, 1989) and time-domain simulation
(Løseth et al., 1990). (By coupling simulation with Hermite models of extremes, the simulation
duration and cost may be reduced because only a limited number of response moments need be
reliably estimated.) These techniques permit arbitrary nonlinear structural behavior, as well as
Morison drag forces based on relative velocities.
We consider here simpler cases of linear structures under non-Gaussian loads. Various
specialized techniques can be applied in these cases to estimate response moments. Recursive
moment relations can be used (Grigoriu and Ariaratnam, 1987; Krenk and Gluver, 1988), if the
force is modelled as a functional transformation of a filtered white noise process. Alternative
closed-form moment results, which permit a force with arbitrary spectral density, follow from
a separable response cumulant model (Koliopulos, 1987). Due to its convenience, we use this
separable model here.
Working in the time domain, we describe the force by its correlation function ρF(τ) and
the linear structure by its impulse response function h(τ). The response mean and variance, mX
and σX², are related to those of the force as follows (Lin, 1976):

mX = mF ∫₀∞ h(τ) dτ ;    σX² = σF² ∫₀∞ h(τ) Q(τ) dτ ,   where Q(τ) = ∫₀∞ h(u) ρF(τ - u) du        (14)

This result for σX² requires a double integration, or equivalently, a single integration once
the inner integral for Q(τ) has been found for various lags τ. Significantly, once Q(τ) has

been found, additional single integrals yield higher-order moments with the separable model
(Koliopulos, 1987):

α3X / α3F = ∫₀∞ h(τ) Q²(τ) dτ / [ ∫₀∞ h(τ) Q(τ) dτ ]^(3/2) ;
(α4X - 3) / (α4F - 3) = ∫₀∞ h(τ) Q³(τ) dτ / [ ∫₀∞ h(τ) Q(τ) dτ ]²        (15)

Once obtained, these moments can be used to estimate response extremes and power spectra as
in Eqs. 11-13.

RESPONSE STATISTICS: NUMERICAL RESULTS

We adopt here a one-degree-of-freedom (1DOF) structural model, so that h(t) = exp(-ζωn t)
sin(ωd t)/(m ωd) in terms of the natural frequency ωn, damping ratio ζ, and damped frequency
ωd = ωn √(1 - ζ²). Note that the foregoing results apply equally to multi-degree-of-freedom linear
systems, in which h(t) is a linear combination of 1DOF modal impulse responses.
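Eqs. 14-15 reduce to one-dimensional sums once h(t) and ρF(τ) are discretized. The sketch below uses the 1DOF impulse response just defined and a discrete convolution for the inner integral Q(τ); the even extension of ρF to negative lags is this sketch's assumption.

```python
import numpy as np

def response_moments(rho_F, mF, sF, a3F, a4F, wn, zeta, mass, dt):
    """Response mean/variance (Eq. 14) and skewness/kurtosis (Eq. 15) for a
    1DOF system with h(t) = exp(-zeta*wn*t) sin(wd*t) / (mass*wd).

    rho_F holds the force correlation at lags 0, dt, ...; it is extended
    evenly to negative lags for the inner integral Q(tau)."""
    n = rho_F.size
    t = np.arange(n) * dt
    wd = wn * np.sqrt(1.0 - zeta**2)
    h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (mass * wd)
    mX = mF * np.sum(h) * dt                             # static mean, Eq. 14
    rho_full = np.concatenate([rho_F[::-1], rho_F[1:]])  # even extension of rho_F
    Q = dt * np.convolve(h, rho_full)[n - 1:2 * n - 1]   # Q(tau_k) = sum_u h(u) rho(tau_k - u) dt
    I1 = np.sum(h * Q) * dt
    I2 = np.sum(h * Q**2) * dt
    I3 = np.sum(h * Q**3) * dt
    sX = sF * np.sqrt(I1)
    a3X = a3F * I2 / I1**1.5                             # Eq. 15
    a4X = 3.0 + (a4F - 3.0) * I3 / I1**2                 # Eq. 15
    return mX, sX, a3X, a4X
```

As the text notes, the higher response moments shrink as resonance "averages" the non-Gaussian force, even while the response variance grows.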
Figure 4 shows deck sway moments for various natural periods, with 1% damping and
modal mass m = 21×10³ tonnes (including both deck and legs). Analytical results are based on
Eqs. 14 and 15, with force correlation function ρF(τ) from Eq. 13. Both the response mean and
variance grow with Tn, the former due to decreasing stiffness and the latter due to increased
resonance. At the same time, however, as Tn grows the non-Gaussian force is more effectively
"averaged" by the structure, leading to reduced higher response moments. Thus, dynamics may
somewhat lessen nonlinear/non-Gaussian effects due to force nonlinearities.
Similar offsetting effects occur with various damping levels: the response variance in-
creases with decreasing damping, but the higher moments decrease at the same time. As a
result, extreme responses may vary less rapidly with structural properties, such as period Tn or
damping ζ, than the Gaussian model predicts. This is shown by Figure 5, which shows predicted
and simulated response extremes versus damping ratio for Tn = 6.59s, the estimated period of
the jackup under study. While the Gaussian model generally underestimates response extremes,
it overestimates their variation with ζ. This suggests that in predicting extreme response, the
choice of damping value may be less significant than the Gaussian model implies.

CONCLUSIONS

• In addition to gross force levels, the relative contribution of nonlinear, non-Gaussian effects
on base shear grows with wave height and integration to the exact surface (Figure 1).
• Base shear moments are accurately predicted by an analytical narrow-band model, which
requires a double integration over amplitude and phase (Eqs. 9 and 10). Resulting base
shear extremes and power spectra are accurately estimated from these moments with non-
Gaussian Hermite models (Eqs. 11-13; Figures 2 and 3).
• For linear jackup structures, either 1DOF or MDOF, convenient analytical results have
been shown for response moments (Eqs. 14 and 15). These agree well with simulation (Fig-
ure 4), and their analytical form makes them useful when combined with outer integration
over random environmental variables.
• Resonant effects lead to increasing response variances with decreasing damping or increas-
ing period Tn (approaching the wave period). Response extremes may grow less rapidly,
because dynamic effects may somewhat reduce nonlinear/non-Gaussian effects due to force
nonlinearities. Gaussian models may then underpredict response extremes, yet overesti-
mate their variation with structural period and damping.
[Figure: four panels (mean [m], sigma [m], skewness, kurtosis of deck sway) versus structural natural period (3-7 s); simulation (±1σ) compared with analytical predictions for mean-surface and exact-surface kinematics. Tp = 12.5s, d = 83.7m, l = 58.6m, r = 1.25m.]

Figure 4: Deck sway moments for various natural periods.


[Figure: median extreme deck sway response [m] in a 3-hour seastate versus damping ratio ζ (0.01-0.2, log scale); simulation (±1σ) compared with 2-moment Gaussian and 4-moment Hermite models.]

Figure 5: Extreme deck sway response.

Topics of planned future study include more general environmental models. These may include
variable current profiles with depth, short-crested seas, and, perhaps most importantly, wind
as well as wave and current in possibly different directions. More general structural behavior
will also be considered, including geometric and soil nonlinearities, relative velocity effects, and
fatigue failure prediction.

ACKNOWLEDGMENTS

The wave force models used here are continuations of work with Sverre Haver of Statoil, begun
during his recent visit to Stanford. Financial support for the first author has been provided by
the Office of Naval Research, Contract No. N00014-87-K-0475, and by the Reliability of Marine
Structures Program of Stanford University. The second author has received support from A.S.
Veritas Research, a subsidiary of Det norske Veritas.

REFERENCES

Andersen, O.J., E. Førland, S. Haver, and P. Strass (1987). Design basis environmental condi-
tions for Veslefrikk. Rept. 87004A, STATOIL, Stavanger, Norway.
Borgman, L.E. (1969). Ocean wave simulation for engineering design. J. Waterways and Harbors
Div., ASCE, 95(4), 556-583.
Grigoriu, M. and S.T. Ariaratnam (1987). Stationary response of linear systems to non-Gaussian
excitations. Proc., ICASP-5, ed. N.C. Lind, Vancouver, B.C., II, 718-724.
Gudmestad, O.T. and N. Spidsøe (1990). Deepwater wave kinematics models for determinis-
tic and stochastic analysis of drag dominated structures. Proc., NATO-ARW Water Wave
Kinematics, ed. A. Tørum and O.T. Gudmestad, Kluwer Academic Publishers, Dordrecht,
The Netherlands, 57-87.
Haver, S. (1990). On the effects of a joint environmental description and uncertain parameters on
the extremes of a drag-dominated structure. Report RMS-5, Reliability of Marine Structures
Program, Dept. of Civ. Eng., Stanford University.
Koliopulos, P.K. (1987). Prediction methods for non-Gaussian response in linear structural dy-
namics based on a separability assumption. Ph.D. thesis, Civ. Eng. Dept., Univ. College
London.
Krenk, S. and H. Gluver (1988). An algorithm for moments of response from non-normal exci-
tation of linear systems. Stochastic structural dynamics: progress in theory and applications,
ed. S.T. Ariaratnam, G.I. Schueller and I. Elishakoff, Elsevier Publishing, Inc., New York.
Lin, Y.K. (1976). Probabilistic theory of structural dynamics, Robert E. Krieger Publishing Co.,
Huntington, New York.
Ll2lseth, R., O. Mo, and !. Lotsberg (1990). Probabilistic analysis of a jack-up platform with
respect to the ultimate limit state. European Offshore Mechanics Symposium, NTH, Trond-
heim, Norway.
Winterstein, S.R. (1988). Nonlinear vibration models for extremes and fatigue. J. Engrg. Mech.,
ASCE, 114(10), 1772-1790.
Winterstein, S.R. (1990). Random process simulation with the Fast Hartley Transform. J. Sound
Vib., 137(3), 527-531.
Winterstein, S.R. and O.B. Ness (1989). Hermite moment analysis of nonlinear random vibra-
tion. Computational mechanics of probabilistic and reliability analysis, ed. W.K. Liu and T.
Belytschko, Elme Press, Lausanne, Switzerland, 452-478.
STOCHASTIC PROGRAMS FOR IDENTIFYING
SIGNIFICANT COLLAPSE MODES IN STRUCTURAL SYSTEMS

James J. Zimmerman*, Ross B. Corotis* & J. Hugh Ellis**


*Department of Civil Engineering
The Johns Hopkins University, Baltimore, Maryland 21218
**Department of Geography and Environmental Engineering
The Johns Hopkins University, Baltimore, Maryland 21218

Introduction

A fundamental step in the estimation of structural system reliability is the identification


of failure modes. Some systems can be represented by one dominant mode but, in general,
several modes must be identified to ensure an accurate estimation of the system probability
of failure. Once modes are identified, standard techniques can be used to evaluate the
system reliability.
In this paper, a technique is presented to determine significant collapse modes in rigid-
plastic structures using mathematical programming. The assumption of rigid-plastic ma-
terial behavior has been widely used to simplify the system reliability analysis of complex
structures [1,3,4,8]. While this mechanics model has limitations, the information that is
obtainable from a reliability analysis using this model is very valuable.
Except for very simple structures, complete enumeration of all possible failure modes
is a difficult task. This difficulty motivates the search for a relatively small number of
modes which give an accurate estimate of the system probability of failure. A technique is
presented to identify failure modes in rigid-plastic structures based not on marginal modal
probabilities of failure, but on the contribution of the mode to the system probability of
failure. A series of nonlinear mathematical programs is solved to determine failure modes
in the order of decreasing contribution to the system probability of failure.

Background

The structural members are idealized as rigid links connected by nodes. The nodes are
idealized as plastic hinges with unlimited ductility occurring at locations of concentrated
loads and member intersections. The effect of axial force and shear on the plastic moment
capacity of the members is neglected.

Two standard methods of plastic analysis exist: the static or equilibrium approach,
and the kinematic or mechanism method. The kinematic method is used in the following
analysis to identify mechanisms. Failure of a frame is said to occur when a mechanism
forms. Given a structure with s potential hinge locations and a degree of redundancy r, the
number of elementary mechanisms is m = s - r. Using the superposition of mechanisms
approach, any kinematically admissible failure mechanism can be obtained from a linear
combination of the elementary mechanisms [5,9]. These elementary mechanisms can be
derived either by visual inspection or by an automatic procedure [12].
Letting Mpj be the plastic moment capacity of the j-th critical section and θj be the
rotation of the j-th section, the internal work associated with a mechanism can be written

Wint = Σj=1..s Mpj |θj|        (1)

The external work of any kinematically admissible mechanism can be written in terms of
the external work of the elementary mechanisms, ei, and the participation factors for the
elementary mechanisms, ti,

Wext = Σi=1..m ei ti        (2)

The safety margin for any mode can then be expressed as the difference between the
internal and external work,

Z = Wint - Wext = Σj=1..s Mpj |θj| - Σi=1..m ei ti        (3)

In this analysis, the safety margins are assumed to follow a joint normal distribution.
With independence between the loads and resistances, the mean and variance of the safety
margin can be written as

mZ = |θ|ᵀ mM - tᵀ me        (4)
σZ² = |θ|ᵀ VM |θ| + tᵀ Ve t        (5)

where mM and me are the vectors of the mean plastic moment capacities and mean el-
ementary mechanism external work terms, respectively. The covariance structure of the
random variables is specified by the variance-covariance matrices, VM and Ve.
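Eqs. 4-5 are a direct computation once a candidate mechanism (θ, t) is fixed. The sketch below also returns the reliability index β = mZ/σZ used in the program that follows; all numbers in the illustrative mechanism are hypothetical.

```python
import numpy as np

def margin_stats(theta, t, mM, me, VM, Ve):
    """Mean, standard deviation (Eqs. 4-5) and reliability index of the
    safety margin Z, for hinge rotations theta and participation factors t."""
    a = np.abs(theta)                       # |theta|, as in Eqs. 4-5
    mZ = a @ mM - t @ me
    sZ = np.sqrt(a @ VM @ a + t @ Ve @ t)
    return mZ, sZ, mZ / sZ

# Illustrative beam mechanism of a one-bay frame (hypothetical numbers):
theta = np.array([1.0, -2.0, 1.0])          # hinge rotations at the 3 critical sections
t = np.array([1.0])                         # one elementary mechanism participating
mM = np.array([100.0, 100.0, 100.0])        # mean plastic moment capacities
me = np.array([150.0])                      # mean external work term
VM = np.diag([100.0, 100.0, 100.0])         # capacity covariance
Ve = np.array([[225.0]])                    # external-work covariance
mZ, sZ, beta = margin_stats(theta, t, mM, me, VM, Ve)
```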

Identification of the Least Safe Mode

The first step in the analysis is to identify the mode with the highest probability of
failure. In order to avoid full enumeration of all possible failure modes, a mathematical

program is written to find the mode with the highest probability of failure, considering only
kinematically admissible modes. The mathematical program to identify the least reliable
mode is,

Max.: P_f = P[Z ≤ 0] = ∫_{−∞}^{0} f_Z dz = Φ(−m_Z/σ_Z) = Φ(−β)   (6)

Subject to: θ_j − Σ_{i=1}^{m} θ_ij t_i = 0 ,   j = 1, …, s   (7)

where f_Z is the probability density function of the safety margin (assumed to be Gaussian),
Φ(·) is the standard normal cumulative distribution function, β is the reliability index, and θ_ij
is the rotation of the jth joint in the ith elementary mechanism.
The decision variables in this mathematical program are the rotations of the joints,
θ_j, and the elementary mechanism participation factors, t_i. There are s linear constraints
to preserve kinematic admissibility at each critical section. The objective function is a
nonlinear function of the decision variables.
An alternative mathematical program is to minimize the reliability index, β, subject
to the same constraint set. The basis (mechanism) at the optimal solution of this program
is the same as that from the solution of the program given by equations 6 and 7 since the
Φ(·) function is a monotonic transformation.
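The "minimize β" formulation can be sketched as follows. The kinematic constraints of equation 7 are solved for the rotations and substituted into the objective, leaving the participation factors t_i as the only decision variables; a gradient-free Nelder-Mead search stands in for the gradient-based solver used in the paper. The elementary-mechanism matrix and all statistics below are invented for the sketch, not the paper's example frame.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical elementary mechanisms: theta_elem[i, j] is the rotation of
# section j in elementary mechanism i (invented numbers, 2 mechanisms,
# 4 critical sections).
theta_elem = np.array([[1.0, -1.0,  0.0, 0.0],
                       [0.0,  1.0, -1.0, 1.0]])
m_M = np.array([100.0, 100.0, 100.0, 100.0])   # mean capacities
m_e = np.array([150.0, 120.0])                 # mean external work terms
sd_M = 0.15 * m_M
sd_e = 0.25 * m_e

def beta(t):
    theta = t @ theta_elem                     # constraint (7) solved for theta
    m_Z = np.abs(theta) @ m_M - t @ m_e        # equation (4)
    var_Z = (np.abs(theta) @ np.diag(sd_M**2) @ np.abs(theta)
             + t @ np.diag(sd_e**2) @ t)       # equation (5)
    return m_Z / np.sqrt(var_Z + 1e-12)        # small guard against t = 0

# Non-gradient search from a physically meaningful start (mechanism 1 alone).
res = minimize(beta, x0=[1.0, 0.0], method='Nelder-Mead')
print(res.x, res.fun)
```

For these invented data the search descends from the starting elementary mechanism (β ≈ 1.16) to a combination mechanism with a lower reliability index; β is invariant to positive scaling of t, so only the direction of the participation vector matters.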

Identifying the Second Mode

A rigid-plastic structural system can be modelled as a series system where formation
of any mechanism is considered system failure. The system probability of failure is found
from the probability of the union of all failure modes, P[Z_1 ≤ 0 ∪ Z_2 ≤ 0 ∪ ⋯ ∪ Z_k ≤ 0].
For most structures, identification of the least reliable mode does not provide adequate
information to obtain an accurate evaluation of the system probability of failure. After
the least reliable mode has been identified by the solution of the mathematical program
(equations 6 and 7), the next mode to be found is the mode which, when combined with
the first mode, gives the largest value for the probability of mode 1 or mode 2 occurring
(i.e., Max.: P[Z_1 ≤ 0 ∪ Z_2 ≤ 0]).
The mathematical program to find the second mode, having already determined the
least reliable mode, is

Max.: P_f = 1 − ∫_0^∞ ∫_0^∞ f_{Z1,Z2} dz_1 dz_2   (8)

Subject to: θ_j − Σ_{i=1}^{m} θ_ij t_i = 0 ,   j = 1, …, s   (9)

where f_{Z1,Z2} is the joint density function for modes 1 and 2 (the bivariate normal
density function). Note that m_1 and σ_1 are constants determined by the first cycle of the
analysis and only m_2, σ_2, and ρ_12 are functions of the decision variables, θ_j and t_i.
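The objective in equation (8), the probability that mode 1 or mode 2 fails, can be sketched numerically for jointly normal safety margins. The reliability indices and correlation below are illustrative values, not outputs of the paper's optimization; β_1 would be fixed by the first cycle, while β_2 and ρ_12 are functions of the decision variables.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

beta1, beta2, rho12 = 1.8, 2.0, 0.4   # hypothetical values

# For standardized margins, P[Z1<=0 and Z2<=0] is the bivariate normal CDF
# evaluated at (-beta1, -beta2) with correlation rho12.
p_joint = multivariate_normal(mean=[0.0, 0.0],
                              cov=[[1.0, rho12],
                                   [rho12, 1.0]]).cdf([-beta1, -beta2])

# Inclusion-exclusion for the union probability, P[Z1<=0 or Z2<=0].
p_union = norm.cdf(-beta1) + norm.cdf(-beta2) - p_joint
print(p_joint, p_union)
```

The union probability always lies between the larger of the two modal probabilities and their sum, which is a quick sanity check on the bivariate integration.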

Identifying the nth Mode

In general, once n−1 modes have been identified, the nth mode is identified so as to
maximize the system probability of failure. That is, find the mode such that P[Z_1 ≤ 0 ∪
Z_2 ≤ 0 ∪ ⋯ ∪ Z_n ≤ 0] is maximized when n−1 modes have already been identified. The
mathematical program to find the nth mode is:

Max.: P_f = 1 − ∫_0^∞ ⋯ ∫_0^∞ f_{Z1⋯Zn} dz_1 ⋯ dz_n   (10)

Subject to: θ_j − Σ_{i=1}^{m} θ_ij t_i = 0 ,   j = 1, …, s   (11)

where the mean and variance of the first n−1 modes are known, as well as the correlations
between these modes. Only the mean and variance of the nth mode and the correlation of
the nth mode with the previous modes are functions of the decision variables. The number
of constraints and decision variables is the same as in all previous cycles.
The objective function (equation 10) has to be evaluated many times during the search
for the optimal solution. Unfortunately, multiple evaluations of the multinormal integral in
equation 10 are computationally expensive even when using reduction formulae [10] or with
the use of advanced Monte Carlo techniques. Replacement of the objective function with
Ditlevsen's lower bound [2] for an n-member series system provides an efficient alternative
to the evaluation of the multinormal integral. Using the lower bound approximation, the
objective function of the mathematical program to find the nth mode is

Max.: P[Z_1 ≤ 0] + Σ_{i=2}^{n} max{ ( P[Z_i ≤ 0] − Σ_{j=1}^{i−1} P[Z_i ≤ 0 ∩ Z_j ≤ 0] ) , 0 }   (12)

subject to the constraints of equation 11. Use of the lower bound requires solution of the
mathematical program by non-gradient based techniques (e.g. a Hooke and Jeeves routine
[11]) since gradients for equation 12 are not easily computed.
The lower bound is most accurate when the correlation between the modes is relatively
low. Modes selected by the mathematical programs tend to have low correlation since
these generally contribute more to system reliability than modes with high correlation.
Thus, in the context that the lower bound is being used here, the approximation is quite
accurate.
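Ditlevsen's lower bound of equation (12) is straightforward to evaluate once the reliability indices and modal correlations are in hand. The following sketch assumes standardized normal safety margins and uses a bivariate normal CDF for the pairwise joint probabilities; the three-mode data are invented for illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def ditlevsen_lower_bound(betas, rho):
    """Ditlevsen lower bound, equation (12), for a series system with
    standard-normal safety margins: reliability indices `betas` and
    correlation matrix `rho`."""
    p = norm.cdf(-np.asarray(betas))
    bound = p[0]
    for i in range(1, len(betas)):
        # Sum of pairwise joint failure probabilities with previous modes.
        joint = sum(
            multivariate_normal(mean=[0.0, 0.0],
                                cov=[[1.0, rho[i][j]],
                                     [rho[i][j], 1.0]]).cdf([-betas[i], -betas[j]])
            for j in range(i))
        bound += max(p[i] - joint, 0.0)   # clip negative contributions at zero
    return bound

# Illustrative three-mode system (numbers invented for the sketch).
betas = [1.8, 1.9, 2.1]
rho = [[1.0, 0.3, 0.2],
       [0.3, 1.0, 0.4],
       [0.2, 0.4, 1.0]]
print(ditlevsen_lower_bound(betas, rho))
```

Each evaluation requires only bivariate (not full multinormal) integrals, which is what makes the bound attractive as a surrogate objective.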

Local Optimality

It can be shown that the mathematical programs set out in equations 6-12 are non-
convex programs [11]. Solutions from the optimization, while Kuhn-Tucker points, cannot
therefore be guaranteed globally optimal. Thus, a strategy to obtain a group of locally op-
timal solutions which contains the global optimum is necessary. As is typical in the solution
of nonconvex programming problems, a set of starting points is selected and the optimiza-
tion is performed several times beginning at these initial bases. Experience has shown that
using a set of starting points that is physically meaningful is the most successful. The set of
elementary mechanisms, augmented with a set of combinations of elementary mechanisms,
was used as starting points for the optimization. The number of starting points used was
never greater than 1.5m, where m is the number of elementary mechanisms.
The nonconvexity of the mathematical program can be illustrated. Noting that the
constraints in equation 7 can be solved for the rotations, θ_j, these decision variables can be
eliminated from the optimization, leaving an unconstrained optimization problem with
the participation factors, t_i, as the only decision variables. For a single story, single
bay structure, there are four elementary mechanisms and thus four decision variables.
Holding two of the t's constant, the objective function can be plotted. The objective
function surface (probability of failure) is shown in Figure 1 plotted against two of the
participation factors. There is a slight dip between the two ridges, indicating two local
optima corresponding to beam and combination failure modes. The combination mode is
the global optimum, but it can be seen that if a starting point is selected on the left side of the
figure (say t_2 ≤ 0), the optimization will lead to a locally optimal beam failure mode.

Plane Frame Example

The two story, two bay structure shown in Figure 2 was taken from Ma and Ang [6]
for analysis using the procedure outlined for selection of failure modes. The properties for
each material group and load statistics for this structure are shown in Table 1. Moment
capacities of all sections within a material group are assumed to be perfectly correlated, and
capacities of sections belonging to different material groups are assumed to be uncorrelated.
The random variables are distributed such that the corresponding safety margins are jointly
normal.
The least reliable mode was found by solving the mathematical program of equations 6
and 7, and by minimizing the reliability index, β, rather than maximizing the probability
of failure. Eighteen starting points were used and optimal solutions were found using a

gradient-based nonlinear optimization program, MINOS [7]. To find the remaining modes,
the mathematical program shown in equations 11 and 12 was solved by using Ditlevsen's
lower bound as the objective function. The constrained problem was converted to an
unconstrained optimization by solving the constraints and substituting into the objective
function. A modified Hooke and Jeeves direct search technique was used to find solutions
to the mathematical program.
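A minimal Hooke and Jeeves direct search of the kind referred to above might look like the sketch below. It is simplified relative to the classical routine (exploratory coordinate moves followed by a single pattern move, with no separate exploration about the pattern point) and is demonstrated on a smooth test function rather than on equation (12).

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    """Simplified Hooke and Jeeves pattern search: exploratory moves along
    each coordinate, then a pattern move through the improved point.
    Gradient-free, so it suits non-differentiable objectives."""
    def explore(base, fbase, h):
        x, fx = base.copy(), fbase
        for i in range(len(x)):
            for d in (+h, -h):                 # try each coordinate direction
                trial = x.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    break
        return x, fx

    x = np.asarray(x0, dtype=float)
    fx, h = f(x), step
    for _ in range(max_iter):
        xn, fn = explore(x, fx, h)
        if fn < fx:
            xp = xn + (xn - x)                 # pattern move through new point
            x, fx = (xp, f(xp)) if f(xp) < fn else (xn, fn)
        elif h > tol:
            h *= shrink                        # no improvement: refine the mesh
        else:
            break
    return x, fx

# Demo on a smooth bowl with minimum at (1, -0.5) (illustrative only).
xmin, fmin = hooke_jeeves(lambda v: (v[0] - 1.0) ** 2 + 2.0 * (v[1] + 0.5) ** 2,
                          x0=[3.0, 2.0])
print(xmin, fmin)
```

On this test function the search converges to the minimizer to within the final mesh size; on the nonsmooth objective of equation (12) it would be restarted from several physically meaningful points, as described in the text.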
The solution from these mathematical programs is shown in Table 2. Due to the non-
convexity of the problems, locally optimal solutions were found using the various starting
points at each cycle. Only the best solution from each cycle was retained. The first column
shows the cycle number and the second column describes the mode type found during that
cycle. The third column lists the reliability index for the mode listed in column 2. The
fourth column shows the value of the objective function at each cycle (i.e., the value of
Ditlevsen's lower bound using this mode and the previously determined modes). After the
sixth cycle, no modes could be found which caused any significant increase in the estimate
of the system probability of failure. Based on these six modes, an estimate for the system
probability of failure using the lower bound was 0.131.
To determine the accuracy of using the lower bound, Monte Carlo simulation was
performed using the six identified modes, and an estimate for P_f was found to be 0.132.
Ma and Ang estimated the system probability of failure to be 0.135 by performing Monte
Carlo simulation with 48 modes identified by enumeration.
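A Monte Carlo check of this kind can be sketched as follows. The reliability indices are the modal β values from Table 2, but the correlation matrix is an assumed equicorrelation (the actual modal correlations are not reproduced here), so the resulting estimate is illustrative rather than a reproduction of the paper's 0.132.

```python
import numpy as np

rng = np.random.default_rng(0)
betas = np.array([1.805, 1.852, 1.852, 2.287, 1.886, 2.065])  # Table 2
rho = 0.3 * np.ones((6, 6)) + 0.7 * np.eye(6)   # assumed equicorrelation 0.3

# Sample correlated standardized safety margins and count series-system
# failures: the system fails if any margin Z_i <= -beta_i.
L = np.linalg.cholesky(rho)
z = rng.standard_normal((200_000, 6)) @ L.T
p_sys = np.mean(np.any(z <= -betas, axis=1))
print(p_sys)
```

Simulating directly over the few identified modes is cheap; the expensive step in the method is finding the modes, not integrating over them.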

Conclusions

Mathematical programs are formulated to identify significant collapse modes in rigid-
plastic structures with random moment capacities and loads. Failure mode identification
is based on the contribution of the identified modes to the system probability of failure
rather than the modal marginal probability of failure.
The mathematical programs presented are nonconvex, leading to the identification of
locally optimal solutions. The selection of physically meaningful starting points can help
in efficiently identifying a set of local optima containing the desired solution.

References

1. Augusti, Giuliano, et al. Probabilistic Methods in Structural Engineering. Chapman
and Hall: London, 1984.
2. Ditlevsen, Ove. "Narrow Reliability Bounds for Structural Systems." Journal of
Structural Mechanics, Vol. 7, No. 4, 1979, pp. 453-472.

3. Ditlevsen, Ove, and P. Bjerager. "Methods of Structural System Reliability." Struc-
tural Safety, Vol. 3, 1986, pp. 195-229.
4. Ellis, J. H., R. B. Corotis, and J. J. Zimmerman. "Analysis of Structural System
Reliability with Chance Constrained Programming." Proceedings of the 1st Int'l
Conf. on Computer Aided Optimum Design of Structures, Southampton, 1989.
5. Hodge, Philip G. Plastic Analysis of Structures. McGraw-Hill: New York, 1959.
6. Ma, H. F. and A. H-S. Ang. "Reliability Analysis of Redundant Ductile Structural
Systems." Civil Engineering Studies, Structural Research Series No. 494, University
of Illinois, Urbana, 1981.
7. Murtagh, B. A. and M. A. Saunders. MINOS 5.1 User's Guide: Technical Report
SOL 83-20R. Systems Optimization Laboratory, Department of Operations Research,
Stanford University, 1987.
8. Nafday, A. M., and Ross B. Corotis. "Failure Mode Enumeration for System Reliabil-
ity Assessment by Optimization Algorithms." Reliability and Optimization of Struct-
ural Systems, P. Thoft-Christensen, ed., Springer-Verlag: New York, pp. 297-306.
9. Neal, Bernard G., and P. S. Symonds. "The Rapid Calculation of the Plastic Collapse
Load for a Framed Structure." Journal of the Institution of Civil Engineers, Part 3,
Vol. 1, 1952, pp. 58-71.
10. Plackett, R. L. "A Reduction Formula for Normal Multivariate Integrals." Biometrika,
Vol. 41, 1954, pp. 351-360.
11. Smith, Alan A., E. Hinton, and R. W. Lewis. Civil Engineering Systems Analysis and
Design. John Wiley and Sons: New York, 1983.
12. Watwood, Vernon B. "Mechanism Generation for Limit Analysis of Frames." Journal
of the Structural Division, ASCE, Vol. 109, No. ST1, Jan. 1979, pp. 1-15.

Figure 1: Sample Objective Function Section (ridges labeled "Beam" and "Combination")

Figure 2: Example Structure (two story, two bay frame; lateral loads F1-F4, vertical loads P and 2P; member groups M1-M6)
Variable   Mean       C.O.V.
M1         70 k-ft    0.15
M2         150 k-ft   0.15
M3         50 k-ft    0.15
M4         90 k-ft    0.15
M5         150 k-ft   0.15
M6         90 k-ft    0.15
F1         38 k       0.15
F2         20 k       0.25
F3         36 k       0.15
F4         20 k       0.25
P          7 k        0.25

Table 1: Moment Capacity and Load Statistics

Cycle   Mode Type     β_mode   Estimated P_f
1       Combination   1.805    0.0355
2       Beam          1.852    0.0663
3       Beam          1.852    0.0961
4       Beam          2.287    0.1061
5       Sway          1.886    0.1228
6       Beam          2.065    0.1310

Table 2: Results of Example Problem


CRITICAL CONFIGURATIONS OF SYSTEMS SUBJECTED TO
WIDE-BAND EXCITATION

Takeru Igusa
Department of Civil Engineering, Northwestern University
Evanston, Illinois, 60208 U.S.A.

SUMMARY
The relationships between the parameters of a structural system and its wide-band
response are explored. The goal is to identify and characterize critical system
configurations for which the response may become highly sensitive to small parameter
variations. Such configurations are useful in design applications since minor
modification of the system may lead to a significant reduction in response. The
investigation, which centers on an examination of analytical expressions for the modal
properties and mean-square response, shows that critical systems can be identified with
mode crossing sets defined in the space of variable parameters. Emphasis is on
characteristics associated with the intrinsic dynamic properties of the system rather than
user-defined parameterizations. Two examples demonstrate the insight that the results of this
paper provide into the system dynamic properties and their relation to the response.

1. INTRODUCTION
The relationships between the parameters of a system and its response to dynamic
loads are important in design and reliability assessment of structural systems. Although
such relationships can be quite general, there frequently exist critical configurations for
which small variations in the system parameters lead to large variations in the system
response. Such configurations are useful in design applications since minor modification
of the system may lead to a significant reduction in response. In addition, they have a
significant impact on probabilistic analysis since small uncertainties in the system
parameters may lead to large uncertainties in the response. Consequently, the
identification and characterization of critical configurations is an important problem in
structural dynamics.
It is well known that for harmonically loaded systems, two critical configurations are:
resonance, which is easily identified by the natural frequencies of the system, and
nodalization, where modal harmonic responses cancel each other at certain points, or
nodes of the structure [1]. The identification of such critical configurations is useful in
developing optimal design strategies and has found applications in aerospace and
rotorcraft systems which experience harmonic loads [1-4]. Studies of periodic and nearly
periodic systems have revealed that the propagation of waves is sensitive to the
variations of the coupling mechanisms and other dynamic properties of each repeating
substructure [5,6]. It has been shown both mathematically and experimentally that such
sensitivities are due to localization of the response [7,8].
Recent work by Igusa and Der Kiureghian [9] has shown examples of critical system
configurations for systems subjected to wide-band excitation. However, due to differences
in the nature of the excitation, critical configurations for wide-band excitation differ from
those found for harmonic loads and wave propagation problems. The goal of the present
work is to develop a method to predict, identify, and characterize such configurations.
The identification problem is restricted to dynamic aspects of the problem in order to
exclude cases of ill-defined parameterizations which can be resolved by simple
reformulation of the parameters. Due to the wide range and complexity of the general
problem, attention is focused on parameters that affect stiffness-related properties of linear
systems. Damping, loading, and material and geometric nonlinearities can also be
considered variable parameters, but such considerations are beyond the scope of this paper.
In the first part of this paper systems consisting of two coupled modes are analyzed.
It is found that the only possible critical configuration is where the frequencies of the two
modes approach each other. It is shown that at such configurations, the modal properties
undergo a sudden shift of character, which was first noted in an entirely different context
by Leissa [10] and was termed "curve veering". The corresponding points in the parameter
space are defined to be the mode-crossing set.
The sensitivity of the response is found to be governed by two parameters: a modal
coupling parameter and a ratio of modal response coefficients. It is shown that
amplification or reduction of the wide-band response at a mode crossing may occur and
can be predicted by the values of these parameters. However, for some parameter values,
the effect of a mode crossing may have little influence on the response, even if severe
"curve veering" of the eigenvalues occurs.
The generalization to systems with more than two modes has been given in
Reference [11]. In this paper, a brief summary of those results is presented.
The concepts in this paper are illustrated by two contrasting examples.

2. FORMULATION OF THE PROBLEM

Consider a linear dynamic system with stiffness, damping, and mass matrices given
by K, C, and M, respectively. The response x(t) of the system to a forcing function f(t) is

M ẍ + C ẋ + K x = f   (1)

This formulation is applicable to lumped-mass or continuous systems using appropriate
coordinate systems [12]. Herein, attention is on forcing functions and response variables of
the form

f(t) = q w(t) ;   y(t) = r^T x(t)   (2)

respectively, where q and r are constant vectors and the scalar function w(t) is a white-
noise process with constant power spectral density G_0. General filtered-white-noise

excitation can be modeled by a simple reformulation of the system matrices in equation (1)
[13].

Let p denote the vector of parameters which affect the stiffness properties of the
system by the functional relationship K = K(p). As stated in the introduction, it is assumed
that the damping and mass matrices are constant with respect to p. It is also assumed that
the system is classically damped (i.e., the free-vibration undamped mode shapes
diagonalize the damping matrix) and that the modal damping ratios are small in relation
to unity. The parameterization of the system is not of interest in this paper; therefore, by
proper definitions of the system parameters, it can be assumed that the stiffness matrix is not
highly sensitive to variations in the parameters,

∂k_ik/∂p_j ~ k_ik/p_j   for all i, k, j   (3)

The mode shapes, frequencies, and modal damping ratios of the system are denoted by
φ_i(p), ω_i(p), and ζ_i, respectively, where the modal masses are normalized to unity.

The root-mean-square of the response y(t) is the simplest and most fundamental
response quantity for wide-band excitations [12] and is used throughout this study. Since
the parameter p affects the stiffness matrix, which in turn influences the response y(t), the
root-mean-square response is a function of p and is denoted R(p). A system is considered
critical if small variations of the parameters p result in large variations of the response R.
One possible measure of sensitivity is the response ratio

Λ(p_1, p_2) = R(p_1) / R(p_2)   (4)

where the parameter points p_1, p_2 are chosen from distinct regions of the parameter space,
defined in general terms based on the dynamic properties of the system. Large, small, and
unit values of Λ(p_1, p_2) correspond to amplification, reduction, and no change of the
response at p_1 relative to the response at p_2. The numerical values for p_1, p_2 are
dependent upon the particular parameterization of a particular system and can be treated
separately. In this way, the problem is effectively decomposed into the dynamic aspects of
the system and the parameterization. As stated in the introduction, the emphasis of this
paper is on the former.
The set of possible critical configurations for wide-band response is a subset, M, of the
parameter space P and is termed the critical set. In terms of the measure in equation (4), M
is defined to be M = { p_1 ∈ P such that Λ(p_1, p_2) ≪ 1 or
Λ(p_1, p_2) ≫ 1 for some p_2 in a neighborhood of p_1 }. The sensitivity measure Λ(p_1, p_2)
will be used in this paper to identify and characterize M in detail.

3. MODAL PROPERTIES OF TWO-MODE SYSTEMS


A system whose wide-band response is described by a single mode has no critical
configurations [11]. (This is unlike the harmonic response case, where a critical
configuration for a lightly-damped single-mode system always exists at resonance.) The
simplest, non-trivial system with critical configurations is a two-mode system. In this

section a thorough modal analysis is performed to gain insight into its dynamic
characteristics.
Consider a system with two modes described by a stiffness matrix K(p) dependent on
parameters p. To establish a frame of reference, the base system is defined by zero
parameter values, p = 0. Let the base natural frequencies and mode shapes corresponding
to the base system be denoted ω̂_i and φ̂_i, for i = 1, 2. The modified modal properties
correspond to non-zero parameter values and are determined by the eigenvalue problem
for the modified system given by the 2 x 2 matrix equation

[ K(p) − ω² I ] Q = 0   (5)

where K(p) is the stiffness matrix function in modal coordinates

K(p) = [ k̂_11(p)  k̂_12(p) ]
       [ k̂_12(p)  k̂_22(p) ]   (6)

where k̂_ik = φ̂_i^T K(p) φ̂_k, and the modified mode shapes are given in terms of Q by the
relation

φ_i = [ φ̂_1  φ̂_2 ] Q_i   (7)

The term k̂_12(p) is the cross-modal stiffness which couples the responses of the two modes.
For notational convenience, the diagonal terms are rewritten as

k̂_ii(p) = ω²_Si(p)   (8)

for i = 1, 2, where the ω_Si(p) are termed shifted frequencies, whose physical interpretation will
be discussed later in this section.
For the base system, p = 0, k̂_12(0) = 0, and ω²_Si(0) = k̂_ii(0) = ω̂_i², and the eigenvalue
problem yields the original frequencies and mode shapes. For the modified system, the
characteristic equation is a bi-quadratic and the eigenvalue problem is readily solved.
Sackman and Kelly [14] and Sackman, et al. [15] solved the problem for a special class of
two-degree-of-freedom systems, and these results are generalized for all two-mode systems
to yield

ω_i² = ω²_Sa ( 1 ± √(β² + γ²) )   (9)

Q_i = ( 1 + γ⁻² [ −β ± √(β² + γ²) ]² )^(−1/2) { 1 ,  γ⁻¹ [ −β ± √(β² + γ²) ] }^T   (10)

for i = 1, 2, in which the plus sign is associated with i = 1 and the minus sign with i = 2,
ω²_Sa = (ω²_S1 + ω²_S2)/2 is the average shifted frequency, and

γ = 2 k̂_12(p) / (ω²_S1 + ω²_S2) ;   β = (ω²_S1 − ω²_S2) / (ω²_S1 + ω²_S2)   (11)

are non-dimensional parameters measuring modal coupling and frequency tuning,
respectively. The expression for the modified mode shapes in terms of the original modes
is given by equations (7) and (10),

φ_i = ( φ̂_1 + γ⁻¹ [ −β ± √(β² + γ²) ] φ̂_2 ) / ( 1 + γ⁻² [ −β ± √(β² + γ²) ]² )^(1/2)   for i = 1, 2   (12)

The parameters chosen in these results are particularly suited for interpreting the
characteristics of the modal properties and the wide-band response.
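The closed-form frequencies of equation (9) can be checked numerically against a direct eigenvalue solution of equation (5). The 2 x 2 modal stiffness matrix below (shifted frequencies and cross-modal coupling) is hypothetical, chosen only to exercise the formulas with unit modal masses.

```python
import numpy as np

# Hypothetical shifted frequencies and cross-modal stiffness.
w_s1, w_s2, k12 = 10.2, 10.0, 2.0
K = np.array([[w_s1**2, k12],
              [k12, w_s2**2]])

w_sa2 = (w_s1**2 + w_s2**2) / 2                     # average shifted frequency
gamma = 2 * k12 / (w_s1**2 + w_s2**2)               # coupling parameter, eq. (11)
beta = (w_s1**2 - w_s2**2) / (w_s1**2 + w_s2**2)    # tuning parameter, eq. (11)

# Equation (9): the pair of modified squared frequencies.
w2_closed = w_sa2 * (1 + np.array([+1, -1]) * np.sqrt(beta**2 + gamma**2))

# Reference solution of the eigenvalue problem, equation (5).
w2_eig = np.sort(np.linalg.eigvalsh(K))[::-1]
print(w2_closed, w2_eig)
```

For any symmetric 2 x 2 matrix the two results agree exactly, since equation (9) is an algebraic restatement of the eigenvalues in terms of β and γ.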
In order to gain greater insight into the modal properties, special cases are examined
in detail. First, consider the case |β| ≫ γ, which corresponds to widely-spaced shifted
frequencies in relation to the coupling parameter. The expressions for the modified modal
properties simplify to

φ_i = φ̂_i ;  ω_i = ω_Si   if ω_S1 > ω_S2 , for i = 1, 2   (13)

φ_1 = φ̂_2 , φ_2 = φ̂_1 ;  ω_1 = ω_S2 , ω_2 = ω_S1   if ω_S1 < ω_S2   (14)

Next, consider β = 0, which corresponds to identical shifted frequencies. In this case,
equations (9) and (12) reduce to

ω_i² = ω²_Sa ( 1 ± γ ) ;   φ_i = (1/√2) [ φ̂_1 ± φ̂_2 ]   for i = 1, 2   (15)

Equations (13) and (14) show that there is a one-to-one correspondence between the modes
of the modified system with widely-spaced shifted frequencies and those of the original
system. The mode shapes of the modified system are identical to those of the original
system, and the frequencies are given by the shifted frequencies. However, for modified
systems with closely-spaced modes, where β and γ are of the same order of magnitude, the terms
containing the parameter γ in equations (9) and (12) become significant. Consequently,
each mode of the modified system contains characteristics of both modes of the original
system. At β = 0, equation (15) shows that each mode of the modified system equally
shares characteristics of both modes of the original system.
Equations (13) and (14) show that each mode of the modified system changes in char-
acter from one mode of the shifted system to the other when the value of ω_S1 − ω_S2
reverses sign. This change was first noted by Leissa [10] in the examination of rectangular
membranes. An important question is whether this change is real or artificial, brought
upon by the assignment of the index i to the plus/minus sign in front of the radicals in
equations (9) and (12). If these equations for the modal properties are examined for a con-
tinuous change of the shifted frequencies, it can be seen that mode 1 of the modified sys-
tem takes predominantly mode 1 character of the unmodified system when ω_S1 > ω_S2, but
continuously changes to mode 2 character when ω_S2 crosses and exceeds ω_S1. A similar
statement can be made for mode 2 of the modified system. In conclusion, the change of
modal properties will always occur (except for the trivial case of decoupled systems defined
by γ = 0), and is quantified by equations (9) and (12). Herein, this transition, mathematically
defined by the change of algebraic sign of the difference of shifted frequencies, ω_S1 − ω_S2, is
called a mode crossing (using terminology suggested by Triantafyllou [16]). The
corresponding set of points in the parameter space P is defined to be the mode crossing set M_mc

M_mc = { p ∈ P such that ω_S1(p) = ω_S2(p) }   (16)

The mode crossing concept is illustrated in Fig. 1, where the modified frequencies ω_i
are plotted with respect to a scalar parameter, p. Three pairs of curves are shown
corresponding to three levels of interaction, in which the lower member of each pair
represents ω_1, and the upper member represents ω_2. The curves closely follow those
given by the shifted frequencies ω_S1(p) and ω_S2(p) except at the mode crossing. The dotted
lines represent the limiting case of no interaction (γ → 0), defined by equations (13) and
(14). The lines appear to cross each other, but are actually sharply angled due to a sudden
transition at perfect tuning. Leissa [10] used the term "curve veering" to describe this
phenomenon. The other sets of lines represent cases of larger interaction, which result in a
separation of frequencies in the transition and a wider transition zone.
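The veering behavior is easy to reproduce numerically: sweep a scalar parameter p through a crossing of two shifted frequencies and track the eigenvalues of a coupled 2 x 2 modal stiffness matrix. The matrix below is hypothetical; the gap between the two eigenvalue branches never closes, and its minimum (2 k̂_12 here) occurs at the crossing, shrinking as the coupling tends to zero.

```python
import numpy as np

k12 = 0.05                          # small off-diagonal (modal) coupling
p_grid = np.linspace(-1.0, 1.0, 201)
gaps = []
for p in p_grid:
    # Shifted "frequencies" 100 +/- p cross at p = 0; coupling k12 is fixed.
    K = np.array([[100.0 + p, k12],
                  [k12, 100.0 - p]])
    lam = np.linalg.eigvalsh(K)     # ascending eigenvalues
    gaps.append(lam[1] - lam[0])
gaps = np.asarray(gaps)
print(gaps.min(), p_grid[gaps.argmin()])
```

Analytically the gap is 2√(p² + k̂_12²), so the branches "veer" apart with minimum separation 2 k̂_12 at the crossing rather than intersecting, which is exactly the behavior sketched in Fig. 1.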

Figure 1. Frequency vs. parameter relation for a 2-DOF system (frequency plotted against the parameter p).

The rate of transition (degree of curve veering) at a mode crossing can be rigorously
determined by evaluating the mode derivatives using the closed-form analytical results
derived in this section. It can be shown that the maximum value of the mode shape
derivative is at the mode crossing, and that the derivative is inversely proportional to
the interaction parameter, γ.
The derivative of the first frequency, ∂ω_1/∂p_j, does not have a large maximum value
at a mode crossing, but undergoes a transition from

∂ω_1/∂p_j ≈ { ∂ω_S1/∂p_j   for ω_S1 < ω_S2
            { ∂ω_S2/∂p_j   for ω_S1 > ω_S2   (17)

as illustrated in Fig. 1. The rate of transition is measured by the second derivative of the
frequency. It can be shown that the second derivative of the frequency is twice the
coefficient of the first derivative of the mode shape. Therefore, the conditions for large
mode derivatives apply to both the mode shapes and frequencies. In other words, the rate
of change of the mode shape at a mode crossing is directly proportional to the frequency
curve veering curvature.
In summary, mode crossings have been defined by the crossing of shifted frequencies
calculated from the diagonals of the modal stiffness matrix. Closed-form analytical
expressions were used to show that modal properties of two-mode systems can become
sensitive to changes in the system parameters only at mode crossings. The degree of
sensitivity is measured by an interaction parameter, where the sensitivity increases for
smaller interaction. This conclusion can be expressed mathematically as follows: let M_γ(ε)
denote the set of parameter points corresponding to systems with small interaction
parameter values,

M_γ(ε) = { p ∈ P such that γ ≤ ε }   (18)

where ε ≪ 1. Then, the modal properties are sensitive at mode crossings provided that
p ∈ M_γ(ε). It is noted that this condition is independent of the system damping.
Another conclusion is that the spacing of the modal frequencies is governed by the
parameters β and γ: modes i and j are closely-spaced if

β , γ ≲ ζ_a,ij   (19)

and widely-spaced if

β ≫ ζ_a,ij   or   γ ≫ ζ_a,ij   (20)

These criteria represent extreme cases, and in this paper it is necessary to add an additional
intermediate category of moderately-spaced modes, defined by

1 ≫ √(β² + γ²) ≫ ζ_a,ij   (21)

Returning to the notion of critical sets, the concepts of mode crossings, closely-spaced
modes, and small interaction can be combined to yield the following concise relation

M ⊂ M_mc ∩ M_γ(ζ_a,ij)   (22)

The information and analytical expressions developed in this section are used in the
investigation of the response of two-mode systems to wide-band excitation.

4. WIDE-BAND RESPONSE OF TWO-MODE SYSTEMS


The root-mean-square response R(p) of the two-mode system for any parameter
value is obtained by substituting the solutions for the modal properties in equations (9)
and (12) into a modal combination expression for wide-band response. To maintain
tractable analytical results, equal damping ratios, ζ_1 = ζ_2 = ζ, are assumed. Details of the
response evaluation can be found in [11]; herein, the final results are summarized.

The response ratio Λ(p_1, p_2) is used to quantitatively determine whether a parameter
value p_1 corresponds to a critical configuration. According to equation (22),
p_1 ∈ M_mc ∩ M_γ(ζ_a,ij), i.e., p_1 is chosen among systems at a mode crossing where β = 0 and
with low modal interaction. The point p_2 is chosen in the neighboring region
corresponding to moderately-spaced modes, which satisfies equation (21).
To reduce the number of variables in the response expressions, define the
nondimensional variables

r̃_ij = r_k / r_l ;   q̃_ij = q_k / q_l   (23)

where q_i and r_i are modal input and response coefficients, respectively, defined by

q_i = φ_i^T q ;   r_i = r^T φ_i   for i = 1, 2   (24)

and k, l is a permutation of i, j chosen such that |q_k r_k| ≥ |q_l r_l|. The final expression for the
response ratio Λ(p_1, p_2) is [11]

Λ(p_1, p_2) = R(p_1)/R(p_2) ≈ (1/√2) ( 1 + r̃²_12 q̃²_12 )^(−1/2) [ ( 1 + r̃_12 q̃_12 )² ( 1 + ρ_12 ) + ( r̃_12 + q̃_12 )² ( 1 − ρ_12 ) ]^(1/2)   (25)

where ρ_12 is the correlation coefficient between the two modal responses, given by
equation (26) in [11].

To determine if p_1 corresponds to a critical configuration, and to gain further insight
into the characteristics of the two-mode system response, the response ratio is examined
for three cases.
Figure 2. Response ratio vs. normalized participation factors (response ratio, 0 to 1.5, plotted against r̃_12 q̃_12 from −1.0 to 1.0).

Case I: q12 - r12. For very small interaction r« t;, the correlation coefficient in
equation (26) is approximately P12 '" 1, the first term in brackets in equation (25) is domi-
nant, and
377

1 + Tdl 12
A(Pl'P2) '" ---,===~~ (27)
..; 1 + Ti2qi2
which is a function of the product T12q12. By the definition in equation (23), the range of
possible values for T12q12 is -1 ~ T12 q12 ~ 1; therefore, A(PI ,P2) can be fully characterized by
evaluating equation (27) for all permissible values of T12q12. The result, plotted in Fig. 2,
shows three possible effects. When T12q12 '" 0, then A(Pl'P2) '" I, meaning the response is
unaffected by the changes of modal properties at a mode crossing. However, when
T12q12 ~ -I, then A( P l'P2) ~ 0, corresponding to a cancellation effect between modes, and
when T12q12 ~ I, then A(PI ,P2) ~ Y2, corresponding to a summation effect.
For larger interaction, where the approximation \rho_{12} \approx 1 is invalid, the full general
expression for the response ratio in equation (25) is applicable.
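The limiting cases above can be checked numerically. The following is a minimal sketch (the function name and the evaluation points are illustrative, not from the paper), evaluating equation (27) at the three characteristic values of the product r̂₁₂q̂₁₂:

```python
import math

def response_ratio(rq):
    """Equation (27): small-interaction two-mode response ratio as a
    function of the product r12_hat * q12_hat (valid on [-1, 1])."""
    return (1.0 + rq) / math.sqrt(1.0 + rq * rq)

print(response_ratio(0.0))    # no effect: 1.0
print(response_ratio(-1.0))   # cancellation: 0.0
print(response_ratio(1.0))    # summation: sqrt(2)
```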
Case II: |\hat{r}_{12}| \gg |\hat{q}_{12}|. By comparing terms in equation (25) it can be shown that

A(p_1, p_2) \approx \begin{cases} \sqrt{\dfrac{1 + \rho_{12} + \hat{r}_{12}^2 (1 - \rho_{12})}{2}} & \text{for } O(\hat{r}_{12}) = O(1) \text{ or } |\hat{r}_{12}| \ll 1 \\[2mm] \hat{r}_{12} \sqrt{\dfrac{1 - \rho_{12}}{2 \left(1 + \hat{q}_{12}^2 \hat{r}_{12}^2\right)}} & \text{for } |\hat{r}_{12}| \gg 1 \end{cases}        (28)
The first result shows no change of the response at the mode crossing. The second result
implies a very large increase of the response at a mode crossing, and is termed the
magnification effect.
Case III: |\hat{q}_{12}| \gg |\hat{r}_{12}|. Due to the symmetry of equation (25), this case is similar to Case II
with \hat{q}_{12} replacing \hat{r}_{12} in equation (28):

A(p_1, p_2) \approx \begin{cases} \sqrt{\dfrac{1 + \rho_{12} + \hat{q}_{12}^2 (1 - \rho_{12})}{2}} & \text{for } O(\hat{q}_{12}) = O(1) \text{ or } |\hat{q}_{12}| \ll 1 \\[2mm] \hat{q}_{12} \sqrt{\dfrac{1 - \rho_{12}}{2 \left(1 + \hat{q}_{12}^2 \hat{r}_{12}^2\right)}} & \text{for } |\hat{q}_{12}| \gg 1 \end{cases}        (29)

This completes the analysis of the two-mode system. It has been shown that the
parameter values for critical system configurations lie within the mode crossing set Mmc
and must satisfy small modal coupling conditions. The modal properties for such systems
are characterized by large mode derivatives and closely-spaced natural frequencies with
curve-veering behavior. The influence of mode crossings on the response was found to be
dependent upon the normalized input and response modal coefficients and can have one
of a variety of effects on the response: cancellation, no effect, summation, and
magnification. These conclusions are based on the dynamic properties of the system rather
than its parameterization, and are backed by mathematical expressions for the modal
properties and wide-band response.

5. GENERALIZATION TO SYSTEMS WITH MORE THAN TWO MODES


It has been shown that for systems with more than two modes, the concept of mode
crossings can be extended to pairs of modes, with the possibility of mode crossing points
involving more than one pair of modes. For an m-dimensional parameter space, the
mode crossing set consists of (m-1)-dimensional surfaces. It was found that the response
functions developed for the two-degree-of-freedom case are contained in the more general
response functions for the larger system. This was used to show that the effects of mode
crossings on the two classes of systems are fundamentally the same.

6. EXAMPLE SYSTEMS
6.1 COUPLED BEAMS

Two contrasting example systems are examined to illustrate the main ideas of this
paper. The first system, shown in Fig. 3, consists of two simply-supported beams connected
by a moment spring with stiffness k_\theta. Beam (A) has unit length, unit mass per length, and
modulus and moment of inertia such that its fundamental frequency is 1 cycle per second
(Hz). Beam (B) has the same mass per length, modulus, and moment of inertia as beam
(A), but has variable length L_b. A single value of modal damping, \zeta, is specified for all
modes. The load is a force uniformly applied along one-fourth of the span of beam (A)
with white-noise amplitude, as shown in Fig. 3. The response is the mean square of the
velocity of beam (B), averaged over the length of the beam.

Figure 3. Moment-spring coupled beams.

The parameters of the problem are p = (\kappa, L_b), where \kappa = k_\theta/(16\pi^2) is the moment
spring stiffness normalized by the rotational stiffness of beam (A) at a supported end. The
root-mean-square responses normalized by the damping ratio, \zeta R(L_b, \kappa), are shown in Fig.
4 for three modal damping values: \zeta = 0.003 (dotted line), 0.01 (solid line), and 0.03 (bold
line), and for three coupling values: \kappa = 0.05, 0.5, and 5.0, which correspond to light,
moderate, and strong coupling, respectively.
There are several interesting characteristics of the response curves. First, and most
significant, are the response peaks at L_b = 1.0, 1.5, and 2.0. For light coupling the peaks are
an order of magnitude in height, for moderate coupling the peaks become attenuated, and for strong
coupling the response is relatively constant. Second, variations in the damping have a
significant effect only for the lightly-coupled system. Third, for light damping, the
responses are nearly independent of the spring coupling at L_b = 1.0 and 2.0. The last
Figure 4. Root-mean-square response vs. beam length for \kappa = 0.05, 0.5, and 5.0.
Dotted line, \zeta = 0.003; solid line, \zeta = 0.01; bold line, \zeta = 0.03.
characteristic is unexpected since it implies that the same amount of energy is transmitted
through the coupling element regardless of the stiffness of the element.
The three response characteristics are examined more closely using the framework of
analysis developed in this paper. First, mode crossings are investigated. The shifted
frequencies are [11]

\omega_{Si}^2 = i^4 (2\pi)^2 + \frac{16}{i^2}\,\kappa \qquad \text{for } 1 \le i \le n        (30)

\omega_{Sj}^2 = \frac{1}{L_b^4} \left[ (j-n)^4 (2\pi)^2 + \frac{16}{(j-n)^2}\,\kappa \right] \qquad \text{for } n < j \le 2n        (31)

which correspond to the uncoupled natural frequencies of beams (A) and (B), respectively.
It can be seen that mode crossings can only occur between natural frequencies of different
beams and that the system is, at most, pair-wise tuned. The modal coupling parameter is

r".. = i (j-n)
2 2
(16)2
7r /C (32)
LlI...wSi+ WSj)

which is directly proportional to the normalized rotational stiffness /c. This parameter
quantifies the relation between the dynamic coupling of the modes and the physical
coupling by the rotational spring.
380

When L_b = 1, then \omega_{Si} = \omega_{S,i+n} for i = 1, \ldots, n, signifying n pairs of mode crossings
between the modes of beams (A) and (B). The remaining mode crossings are found by
solving \omega_{Si} = \omega_{Sj}, which for small \kappa has the solutions

L_b \approx \frac{j}{i} \qquad \text{for } i, j = 1, \ldots, n        (33)

Theoretically, the number of values of L_b corresponding to a mode crossing is of order n^2;
however, for the particular problem in Fig. 3, only the first six modes are significant in the
response calculations. In the range of beam lengths 0.75 \le L_b \le 2.25 the mode crossings
involving the first six modes of the system are tabulated in Ref. [11], along with the
corresponding modal coupling parameters.
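Equation (33) implies that, to leading order in κ, the crossing lengths are simple ratios of mode indices. A short sketch (illustrative; the three-modes-per-beam cutoff is an assumption made here for the plotted range) enumerates the distinct ratios:

```python
from fractions import Fraction

# Small-kappa mode crossings, L_b ~ j/i, where i indexes beam (A) modes and
# j indexes beam (B) modes (three modes per beam assumed for illustration).
crossings = sorted({Fraction(j, i)
                    for i in range(1, 4) for j in range(1, 4)
                    if 0.75 <= j / i <= 2.25})
print([float(x) for x in crossings])   # [1.0, 1.5, 2.0]
```

These are exactly the beam lengths at which the response peaks of Fig. 4 occur.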

The mode crossings correspond precisely to the locations of the response peaks in Fig.
4. To analytically determine the effect of the mode crossings on the response, the response
ratio A(p_1, p_2) for the response of the tuned mode is evaluated using the expression in
equation (25). The result is

A(p_1, p_2) \approx \frac{\rho_{ij}}{\sqrt{r_{ij}^2 + 4\zeta^2}}        (34)
A number of observations can be made from this simple expression.
1. As the coupling stiffness \kappa is increased, r_{ij} increases proportionately (according to
equation (32)) and the amplification factor decreases. This explains the attenuation of
the response peaks in Fig. 4 at the three mode crossings for larger coupling stiffnesses.
2. As the coupling stiffness approaches \kappa \to 0, then r_{ij} \to 0, and the amplification factor
approaches A(p_1, p_2) \to \rho_{ij}/2\zeta. This implies that the amplification is inversely
proportional to the damping, which is shown for the lightly-coupled system (\kappa = 0.05)
in Fig. 4. It also implies that for sufficiently small coupling stiffnesses, the responses
should be independent of \kappa.
3. As the damping approaches \zeta \to 0, the amplification factor increases to
A(p_1, p_2) \to \rho_{ij}/r_{ij}, which is independent of the damping. This is also shown in Fig. 4
for both the moderate and light coupling cases, where the response peaks increase
and asymptotically approach a limiting shape for decreasing damping.
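The two limits in observations 2 and 3 follow directly from equation (34). A minimal numerical sketch (the function name and the parameter values are illustrative assumptions):

```python
import math

def amplification(rho_ij, r_ij, zeta):
    """Equation (34): response amplification at a tuned mode crossing."""
    return rho_ij / math.sqrt(r_ij ** 2 + 4.0 * zeta ** 2)

rho = 0.8
# Weak-coupling limit (r_ij -> 0): amplification ~ rho/(2*zeta)
print(amplification(rho, 1e-9, 0.01))   # ~ 0.8 / 0.02 = 40.0
# Light-damping limit (zeta -> 0): amplification ~ rho/r_ij
print(amplification(rho, 0.05, 1e-9))   # ~ 0.8 / 0.05 = 16.0
```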
It was noted earlier that the normalized responses at the mode crossings are
independent of the coupling parameter at sufficiently small damping. This can be
explained by virtue of the facts that the correlation coefficient becomes \rho_{ij} \approx 0, and the
modal properties at a mode crossing are nearly identical for different coupling values.
Therefore, the mean-square responses are nearly independent of the coupling parameter.
These analytical conclusions are illustrated in this numerical example by the response
peaks at L_b = 1.0 and 2.0 in Fig. 4, which have nearly equal amplitudes for all coupling
values at \zeta = 0.003. Similar behavior is also observed at L_b = 1.5 for smaller damping
values, but the additional curves are not plotted in Fig. 4 to preserve clarity.

6.2 SPRING-SUPPORTED BEAM

Consider a simply-supported beam with two spring supports at one-third span intervals,
as shown in Fig. 5. The beam has unit length, unit mass per length, and modulus and
moment of inertia such that its fundamental frequency is 1 cycle per second (Hz). A single
value of modal damping, \zeta = 0.02, is specified for all modes. The two spring constants, k_1
and k_2, are the variable system parameters. The input force is a white-noise excitation
applied at the midpoint of the beam, and the response of interest is the angle of deflection
at the left end of the beam at the simple support.

Figure 5. Spring-supported beam.

The first three modes have the most significant contribution to the response, and
their properties are investigated in detail. According to the results of this paper, the critical
configurations of the system are the mode crossings which, for the present system, occur
between the third mode and each of the first two modes. It can be shown that the
corresponding values of the support stiffnesses, which define the mode crossing set M_{mc},
are

(k_1 - 131\pi^2)(k_2 - 131\pi^2) = 40\pi^4        (35)
Since there are two unknowns and one equation, the mode crossing set consists of curves
in the two-dimensional parameter space. The curves are hyperbolic with two branches at
k_i < 131\pi^2 and k_i > 131\pi^2, which correspond to the mode crossings between the first and
third modes and the second and third modes, respectively, as indicated in Fig. 6.
Analytical expressions for the modal properties have been used [11] to investigate the
responses at parameter points p_1 = (k_1, k_2) at a mode crossing and p_2 = (k'_1, k'_2) in the
neighborhood of p_1 corresponding to moderately-spaced modes. The result is

A_i(p_1, p_2) \approx \frac{-54521 + 85.745\,k_i - 0.033712\,k_i^2}{\sqrt{1.8218\times10^9 - 5.7128\times10^6\,k_i + 6720.8\,k_i^2 - 3.5155\,k_i^3 + 0.00068988\,k_i^4}}        (36)
for i = 1, 2. The amplification ratio is plotted in Fig. 7 and exhibits a peak value of approximately 1.7 in
the neighborhood of k_i = 131\pi^2 and values small in relation to unity outside of this
neighborhood. In terms of the mode crossing set in Fig. 6, an increase in the
Figure 6. Mode crossings of the spring-supported beam in parameter space.

Figure 7. Amplification ratio vs. first spring constant.

response is expected near k_1 = 131\pi^2, which corresponds to the vertical asymptotes, and a
decrease is expected along the horizontal asymptotes.
The results and conclusions concerning the location of the mode crossings (equation
(35)) and the behavior of the response (equation (36)) have been derived entirely from the
analytical results of this paper. These results and conclusions are verified by numerically
evaluating the root-mean-square response using the exact modal properties and full modal
combination using standard numerical analysis procedures.

Figure 8. Contour plot of root-mean-square response in parameter space.

Figure 9. Three-dimensional plot of root-mean-square response in parameter space.

Complete qualitative views of the response surface are shown in a contour plot in
Fig. 8 and in a three-dimensional perspective in Fig. 9 for 25\pi^2 \le k_1, k_2 \le 250\pi^2. As
predicted in the preceding discussion, significant changes of response levels occur at the
mode crossing set of parameter values. Both the reduced and amplified response levels
predicted by equation (36) are exhibited at the mode crossings, which can be seen by
comparing the mode crossing set in Fig. 6 with the response values in Figs. 8 and 9. It is
noted that for parameter values away from the mode crossing set, the response variations
become stable and nearly linear, which supports the hypothesis that the critical
configurations for wide-band response of the system are found only in the mode crossing
set.

SUMMARY
Several new results have been established in this paper. First, critical configurations
of systems subjected to wide-band excitation have been identified. It has been shown that
these configurations correspond to mode crossings, which are defined in terms of the
natural frequencies of the system. Next, the dynamic characteristics of the system at mode
crossings were investigated using modal analysis. The key result of this section of the
paper is the analytical description of the curve-veering behavior at mode crossings.
Third, the wide-band response characteristics of two-mode systems at mode crossings were
examined and were found to be dependent upon modal interaction and the input and
response modal coefficients. Both amplification and reduction of responses were found to be
possible, and analytical results were obtained to predict the degree of response variation.
Finally, parallels between more complex multi-mode systems and the two-mode system were
established, which allow the dynamic analyst to extract the key characteristics of the
systems' wide-band response.
All of the results are based on dynamic response characteristics, rather than user-
defined parameterizations, and reflect intrinsic dynamic characteristics of the problem.
The analytical expressions of this paper provide insight into the relationships between the
dynamic properties of the system, the modal characteristics, and the mean-square response.
This insight was demonstrated by two examples, in which all of the dominant
characteristics of the responses were identified with specific dynamic properties of the
systems.
The results of this paper can be applied to a number of related problems. First, the
response/parameter relationships can be applied to the problem of optimal design of
dynamic structural systems. Such relationships have already been used for harmonic
loads [1,3], and the new results herein are applicable to design for stochastic loads.
Second, the basic approach can be used to identify other critical system configurations such
as those related to damping or nonlinear effects. Third, the knowledge of the critical
parameters and response/parameter relationships can be used to enhance stochastic finite
element analysis of dynamically loaded systems such as those described in references
[17,18]. Fourth, critical sets can be used in parameter reduction problems in which large
numbers of uncertain or design parameters pose computational difficulties. The critical set
can be used to identify a much smaller set of parameters or a small set of newly redefined
parameters which have the most influence on the response. Finally, the analytical results
may provide additional insight into curve veering investigations of nearly periodic
systems [7,8,19].

ACKNOWLEDGEMENTS
This research was supported by the National Science Foundation under Grant No.
CES-8707792, Dr. S.-C. Liu, Program Director and the Office of Naval Research under
Contract No. 88K-0514, Dr. A. J. Tucker, Scientific Officer. This support is gratefully
acknowledged.

REFERENCES
1. R. G. LOEWY 1984 Journal of the American Helicopter Society, 29, 4-30. Helicopter
vibrations: a technological perspective.
2. J. F. BALDWIN and s. G. HUTTON 1985 American Institute of Aeronautics and
Astronautics Journal, 23, 1737-1743. Natural modes of modified structures.
3. W. C. MILLS-CURRAN and L. A. SCHMIT 1985 American Institute of Aeronautics and
Astronautics Journal, 23, 132-138. Structural optimization with dynamic behavior
constraints.
4. B. P. WANG, A. B. PALAZZOLO, and W. D. PILKEY 1982 State of the Art Survey of Finite
Element Methods, ASME, New York, Chapter 8. Reanalysis, modal synthesis, and dy-
namic design.
5. C. H. HODGES 1982 Journal of Sound and Vibration, 82, 411-424. Confinement of
vibration by structural irregularity.
6. O. O. BENDIKSEN 1987 American Institute of Aeronautics and Astronautics Journal
25(9),1241-1248. Mode localization phenomena in large space structures.
7. C. PIERRE and E. H. DOWELL 1987 Journal of Sound and Vibration, 114, 549-564.
Localization of vibrations by structural irregularity.
8. C. H. HODGES and J. WOODHOUSE 1983 Journal of the Acoustical Society of America,
74,894-905. Vibration isolation from irregularity in a nearly periodic structure: theory
and measurements.
9. T. IGUSA AND A. DER KIUREGHIAN 1989 Journal of Engineering Mechanics, 114, 812-
832. Response of uncertain systems to stochastic excitation.
10. A. W. LEISSA 1974 Journal of Applied Mathematics and Physics (ZAMP), 25, 99-111.
On a curve veering aberration.
11. T. IGUSA Journal of Sound and Vibration (submitted paper). Critical configurations of
systems subjected to wide-band excitation.
12. Y. K. LIN 1976 Probabilistic Theory of Structural Dynamics, Krieger Publishing,
Huntington, New York.

13. D. A. GASPARINI AND A. DEBCHAUDHURY 1980 Journal of the Engineering Mechanics


Division, 106, 1233-1248. Dynamic response to nonstationary nonwhite excitation.
14. J.M. KELLY AND J. L. SACKMAN 1979 Engineering Structures, 1(4), 179-190. Seismic
analysis of internal equipment and components in structures.
15. J. L. SACKMAN, A. DER KIUREGHIAN, AND B. NOUR-OMID 1983 Journal of the
Engineering Mechanics Division, 109, 73-89. Dynamic analysis of light equipment in
structures: modal properties of the combined system.
16. M. S. TRIANTAFYLLOU 1984 Shock and Vibration Digest 16, 9-17. Linear dynamics of
cables and chains.
17. W. K. LIU, T. BELYTSCHKO, AND A. MANI 1986 International Journal for Numerical
Methods in Engineering, 23, 1831-1845. Random Finite Elements.
18. M. SHINOZUKA AND G. DASGUPTA 1986 Proceedings of the 3rd Conference on Dynamic
Response of Structures, ASCE, Los Angeles, California, 44-54. Stochastic Finite
Element Methods in Dynamics.
19. C. PIERRE 1988 Journal of Sound and Vibration 126, 485-502. Mode localization and
eigenvalue loci veering phenomena in disordered structures.
ON RELIABILITY-BASED STRUCTURAL OPTIMIZATION

P. Thoft-Christensen
University of Aalborg
Sohngaardsholmsvej 57, DK-9000 Aalborg, Denmark

ABSTRACT
In this paper a brief presentation of the state-of-the-art of reliability-based structural optimization
(RBSO) is given. Special emphasis is put on problems related to application of RBSO on real (large)
structures. Shape optimization, knowledge-based optimization and optimal inspection strategies
are briefly discussed. A list of 125 references is included in the appendix.

1. INTRODUCTION
RBSO has been an area of research which has grown strongly in the last two decades. In figure 1
(based on the references in the appendix) the number of papers published since 1960 is shown for
5-year periods. From a very slow start in 1960 a drastic increase is seen in the years 1985-1989.
A similar development has taken place for classical (deterministic) structural optimization. This
paper is highly inspired by the references in the appendix. However, for the sake of simplicity,
reference is made only to a few papers, namely when results are taken directly from these papers.
The authors are asked for understanding on this point.


Figure 1. Number of references as a function of year (see appendix).

Why this growing interest in structural optimization? First of all structural optimization is an
efficient methodology for design. It is a general and versatile tool for automatic design and it is
relatively easy to use for practicing engineers. It should also be mentioned that although a number
of approximations have to be made the essential features of the original optimization problems are
maintained. Therefore, structural optimization techniques have a number of advantages compared
388

with traditional design techniques, but clearly the quality of an optimal design is only as good as
the underlying analysis.
Let z = (z_1, \ldots, z_N) be N optimization variables, e.g. dimensions of structural elements. Then
an element reliability-based optimization problem can be formulated in the following way

    min  W(z)
    s.t. \beta_i(z) \ge \beta_i^{min} ,   i = 1, 2, \ldots, m        (1)
         z_i^l \le z_i \le z_i^u ,   i = 1, 2, \ldots, N

where W is the objective function, \beta_i is the reliability index for (failure) element i and \beta_i^{min} the
corresponding minimum acceptable value. z_i^l and z_i^u are simple lower and upper bounds for the
design variable z_i, i = 1, \ldots, N.
The corresponding formulation on systems reliability level K, K = 1, 2, \ldots, can be formulated in
the following way

    min  W(z)
    s.t. \beta_K(z) \ge \beta_K^{min}        (2)
         z_i^l \le z_i \le z_i^u ,   i = 1, 2, \ldots, N

where \beta_K is the systems reliability index on level K and \beta_K^{min} the corresponding minimum
acceptable value.
Notice that the main difference between (1) - (2) and classical structural optimization is that in
classical structural optimization the constraints are related to e.g. stresses and displacements, but
in RBSO to element or system reliability. In (1) - (2) only a single objective function is used. In
a more general formulation multi-objective functions are introduced.
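As a toy illustration of formulation (1) (not an example from the paper), consider a single design variable a, a bar cross-section area, with weight objective W(a) proportional to a and one reliability constraint. With resistance a f_y and load S modelled as independent normal variables, the reliability index has a closed form, and since W is increasing in a the optimum lies on the active constraint. A pure-Python sketch; all names and numbers below are assumptions:

```python
import math

# Illustrative statistics: yield stress and load as independent normals.
mu_fy, sd_fy = 300.0, 30.0
mu_S, sd_S = 1000.0, 200.0

def beta(a):
    """Reliability index for limit state g = a*fy - S (linear, normal)."""
    return (a * mu_fy - mu_S) / math.sqrt((a * sd_fy) ** 2 + sd_S ** 2)

def optimize(beta_min=3.0, lo=1.0, hi=100.0, tol=1e-10):
    """Weight is increasing in a, so the optimum sits on the constraint
    beta(a) = beta_min; beta is monotone here, so bisection locates it."""
    assert beta(lo) < beta_min < beta(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if beta(mid) < beta_min else (lo, mid)
    return hi

a_opt = optimize()
print(round(beta(a_opt), 6))   # -> 3.0 (the reliability constraint is active)
```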
Optimal design problems can be classified at four optimization levels depending on the nature of
the design variables z:
Levell: Cross-sectional optimization
• Sizing design variables
Level 2: Shape optimization
• Sizing design variables
• Shape design variables
Level 3: Configuration optimization
• Sizing design variables
• Shape design variables
• Configuration variables
Level 4: Total optimization
• Sizing design variables
• Shape design variables
• Configuration variables
• Material selection variables

In RBSO most of the work is at level 1. Some work is done at level 2 and a little work at level 3.
To the author's knowledge, no work is done at level 4.
2. OPTIMIZATION OF LARGE COMPLEX STRUCTURES
An overview of optimization of large complex structures is given by Jensen & Thoft-Christensen [1].
When standard methods for optimization of normal-size problems (moderate number of variables
and constraints) are used on large problems (high number of variables and constraints), then
numerous complications will usually occur. First of all, the fact that a large problem is being
considered will result in analysis of a vast amount of data. This primarily causes problems with
insufficient internal memory, which implies frequent swapping of information between internal and
external memory. As a result, methods based on the Hessian may encounter serious problems.
Standard optimization methods may also fail because of numerical problems, lack of convergence,
convergence to a "wrong" solution, cycling, or program errors. In the worst case the calculation time
when using traditional methods will grow exponentially with the number of design variables. This in
itself puts considerable limits on the size of the problem that can be analysed, no matter whether a
(deterministic) problem or an RBSO problem is considered.
In the literature several methods for optimization of large problems are described. These methods
can be divided into five groups:
I direct methods with linear constraints,
II indirect methods with linear constraints,
III direct methods with general constraints,
IV indirect methods with general constraints,
V methods suited for parallel computers.
In RBSO the groups III - V are most relevant, since the reliability constraints are strongly non-
linear. Direct methods handling general constraints are the most widely used in optimization
of structures. Methods based on penalty functions, extended penalty functions, sequential linear
programming, sequential quadratic programming, reduced gradient, projected gradient, augmented
Lagrangian, various Newton-type methods and feasible directions are only some of the types used to
optimize problems in structural engineering. All of these methods work well on small to moderate-
sized problems and some can be extended to optimization of special types of large structures.
The indirect method with general constraints involves decomposition of the problem into smaller
parts (substructures) at 2 or more levels (multilevel). Each of these subsystems must have their
own goals (objective functions) and constraints. The standard form of interconnection between
substructures is the top-down hierarchical form. This means that a given subsystem controls the
systems at the level below and is itself controlled by the system at the level above. Main consi-
derations using this approach should be concerned with information flow between substructures
and the coordination of the problems to ensure fulfilment of the overall goal. However,
decomposition into smaller subproblems that can be independently optimized is rarely possible in
practical engineering problems. Optimizing one subproblem without taking into consideration
interaction with other subproblems may lead to no solution or to non-optimal solutions.
The most well-known indirect methods are
• the model coordination method (Wismer [2], Kirsch [3])
• the goal-coordination method (Wismer [2], Kirsch [3])
• the linear decomposition method (Sobieszczanski-Sobieski et al. [4], [5], [6]).
To the author's knowledge, none of these methods has been used in RBSO, only in classical
structural optimization. The last-mentioned method seems to be suitable for RBSO problems, and
some research is being performed in this area and will be published soon by Jensen & Thoft-Christensen
[7]. This method is described in detail in [4], [5], [6] and [1]. The method can be compared to the
design and organisation of a large structure. A coordinator divides the design of the structure into
smaller problems and assigns the tasks to smaller groups. Each group solves its design problems
with its own tools, and sends the result back to the coordinator. He analyses the results, coordinates
them, and he may change some of the parameters and send the problems back for re-analysis.
This iterative scheme continues until an optimum is achieved. This decomposition clearly has a
number of advantages in the subdivision of the structure. However, the method can diverge. It is
not clear how this method will work with reliability constraints but, as mentioned earlier, research
is being performed in this area.
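The coordinator scheme described above can be caricatured in a few lines. The following is a deliberately trivial sketch of top-down coordination (all functions and numbers are invented for illustration; real decompositions iterate between levels and, as noted, may diverge):

```python
# Two subproblems share a coordinator-controlled parameter y; each optimizes
# its own local variable for the given y, and the coordinator selects the y
# that minimizes the combined objective.

def sub1(y):
    """Subproblem 1: min over x1 of (x1 - y)**2 + y**2 (closed form: x1 = y)."""
    x1 = y
    return x1, (x1 - y) ** 2 + y ** 2

def sub2(y):
    """Subproblem 2: min over x2 of (x2 - 2)**2 + (y - 1)**2 (x2 = 2)."""
    x2 = 2.0
    return x2, (x2 - 2.0) ** 2 + (y - 1.0) ** 2

def coordinate(candidates):
    """Coordinator: try each shared value, keep the best combined cost."""
    best = None
    for y in candidates:
        (_, c1), (_, c2) = sub1(y), sub2(y)
        if best is None or c1 + c2 < best[1]:
            best = (y, c1 + c2)
    return best

y_opt, cost = coordinate([i / 10 for i in range(0, 21)])
print(y_opt, cost)   # best shared value and combined objective
```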

3. SHAPE OPTIMIZATION
To illustrate RBSO at optimization level 2 (shape optimization), consider the simple model of
a mono-tower platform shown in figure 2. This example is taken from Enevoldsen, Sørensen &
Thoft-Christensen [8].

Variable     Designation       Initial value   Lower bound   Upper bound
z_1 = t_1    plate thickness   0.08 m          0.05 m        0.13 m
z_2 = t_2    plate thickness   0.08 m          0.05 m        0.13 m
z_3 = t_3    plate thickness   0.08 m          0.05 m        0.13 m
z_4 = d_1    lower diameter    4.50 m          3.50 m        6.00 m
z_5 = d_2    upper diameter    2.00 m          1.00 m        4.00 m
z_6 = p_1    lower position    -24.7 m         -33.7 m       -20.0 m
z_7 = p_2    upper position    3.00 m          -5.00 m       7.00 m

Figure 2. Mono-tower platform and design variables.

The RBSO problem (2) at systems level K = 1 is solved using \beta^{min} = 3.00. The objective function
W(z) is the steel volume between seabed and topside (initially W = 36.8 m^3). 11 stochastic
variables are used in the reliability modelling of the structure, and two types of failure modes
(yielding failure and fatigue failure) are used; see Enevoldsen et al. [8] for details.
The shape optimization problem is solved with three different reliability models with failure elements
in series:

• Model 1 : 5 yielding failure elements corresponding to an extreme load case.


• Model 2 : 5 fatigue failure elements.
• Model 3 : 10 failure elements from models 1 and 2.
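For failure elements in series, a systems reliability index at level K = 1 can be computed from the element indices. The sketch below assumes mutually independent failure elements (the simplest case; the reliability model of [8] is more elaborate) and uses only the standard library:

```python
from statistics import NormalDist

nd = NormalDist()

def system_beta(element_betas):
    """Series system of independent failure elements: the system fails if any
    element fails, so P_f,sys = 1 - prod(1 - Phi(-beta_i)); the result is
    converted back to an index via the inverse standard normal CDF."""
    p_survive = 1.0
    for b in element_betas:
        p_survive *= 1.0 - nd.cdf(-b)
    return -nd.inv_cdf(1.0 - p_survive)

# Five identical failure elements with beta = 3.5 each: the series-system
# index is somewhat lower than the element index.
print(system_beta([3.5] * 5))
```

Adding elements in series always lowers the system index, which is why the system-level constraint in (2) is generally stricter than the element-level constraints in (1).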
In all three cases the optimization problem is solved directly using the NLPQL and the VMCWD
algorithms. The results are shown in figure 3. Both algorithms gave the same optimal design
corresponding to models 1 and 2. However, the NLPQL algorithm did not converge when used
to solve the optimization problem corresponding to model 3 (several different starting points have
been used).

The optimal designs corresponding to the three different reliability models are seen to be quite
different. The optimal design found with model 1 (extreme failure) has the lowest objective function
(steel volume). The optimal designs corresponding to models 2 and 3 (fatigue failure and extreme
plus fatigue failure) have almost the same steel volumes, indicating that fatigue failure is the most
significant failure mode.
The optimal shapes for models 1 and 2 are very different. The optimal design corresponding to
model 1 has the smallest diameter at the sea level, whereas the design corresponding to model
2 has the largest diameter at sea level. The reason is the different physical mechanisms in the
two failure modes. The optimal shape corresponding to model 3 has (as expected) the smallest
diameter at sea level. Although the fatigue failure mode is the most significant, so that the shape
from model 2 should be expected, the diameter increases below sea level due to the influence from
the extreme wave failure mode.
Compared with the initial structural design the steel volume is reduced by 12 % and the systems
reliability index is increased from 1.43 to 3.00.


Variable     Model 1    Model 2    Model 3
z_1 = t_1    0.051 m    0.050 m    0.050 m
z_2 = t_2    0.050 m    0.061 m    0.057 m
z_3 = t_3    0.050 m    0.050 m    0.050 m
z_4 = d_1    3.50 m     3.50 m     6.00 m
z_5 = d_2    1.22 m     4.00 m     3.17 m
z_6 = p_1    -33.7 m    -29.7 m    -29.7 m
z_7 = p_2    -2.87 m    3.01 m     -0.82 m
W(z)         13.6 m^3   31.0 m^3   32.7 m^3

Figure 3. Optimization results for the 3 models.

4. KNOWLEDGE-BASED OPTIMIZATION
Automatic RBSO will probably in general not be possible if optimization is taking place at
optimization level 3 (configuration optimization). It seems to be much more reasonable to use some
kind of interactive system, where expert knowledge is used to improve the design. In this chapter
a very simple example is shown where expert knowledge is used in an extremely simple way. This
example is taken from Thoft-Christensen [9], where more details can be found, and is based on an
M.Sc. thesis by Frisk & Poulsen [10]. 10 design variables are considered, namely 6 shape variables
z_1, \ldots, z_6 and 4 sizing variables z_7, \ldots, z_{10} (see figure 4). The structure has 19 tubular members
and each of them has 3 failure elements (failure modes), namely a yield failure element at each end
of the beam and a stability failure element.

Figure 4. Frame structure with 6 shape design variables z_1, \ldots, z_6 and 4 sizing design variables
z_7, \ldots, z_{10}.

The optimization problem is formulated on element level and with the weight as objective function:

    min  W(z) = 7.85 \sum_{i=1}^{19} l_i(z)\, a_i(z)
    s.t. \beta_j(z) \ge 4.00 ,   j = 1, \ldots, 38   (yielding)        (3)
         \beta_j(z) \ge 4.00 ,   j = 39, \ldots, 57   (stability)
         z_k^l \le z_k \le z_k^u ,   k = 1, \ldots, 10
The start value of W is 148.75 tons and the smallest reliability index for any failure element is
\beta = 5.42. Optimal values for z are obtained after 27 iterations with the NLPQL algorithm. The
minimum weight is W = 104.49 tons and the smallest \beta-value is \beta = 4.00. This lowest acceptable
reliability index \beta = 4.00 is obtained for 7 stability failure elements. The shape of the structure
in the initial state (iteration 0), after 8 iterations, after 20 iterations, and the optimal shape (after
27 iterations) are shown in figure 5.
It is obvious from figure 5 that the optimal solution is not optimal from an economic point of view.
It is expensive to produce the 3 tubular joints in the symmetry line. This result is typical for shape
optimization of structures where the weight is used as an objective function. It is, however, not
expedient to reformulate the optimization problem so that the production costs of e.g. tubular
joints are included. Formulation of the objective function will namely in such a case be very
complicated. It seems to be much more natural to use expert knowledge in the way described
below. As a simple example of expert knowledge consider the brace in figure 6, where, depending
on the position of the joint, a K-brace or an X-brace is considered most economic.

Iteration      0         8         20        27
W              148.75 t  117.06 t  104.67 t  104.49 t
Smallest β     5.42      3.92      3.97      4.00

Figure 5. Iteration history.

Figure 6. Optimal braces (K-brace or X-brace, depending on the position of the joint).

State          1        2        3        4
Weight (tons)  148.75   104.49   104.47   104.94
Smallest β     5.42     4.00     3.43     4.00

State          5        6        7        8
Weight (tons)  106.05   110.41   110.39   112.70
Smallest β     1.23     4.00     2.28     4.00

Figure 7. Shape optimization with application of expert knowledge.

The same optimization problem is considered again, but now the expert knowledge illustrated in
figure 6 is included. The result of the optimization is shown in figure 7. States 1 and 2 are
identical with the initial state and the optimal state in figure 5.
In state 3 the middle brace is fixed as an X-brace. By continued iteration state 4 is then obtained.
Next, the lowest brace is fixed as a K-brace (state 5) and by renewed optimization state 6 is obtained.
Finally, the upper brace is fixed as an X-brace (state 7) and by optimization state 8 is obtained.
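The procedure behind figure 7 — alternate a continuous reliability-based optimization with a discrete expert decision that fixes one brace topology, then re-optimize — can be sketched as a simple loop. Everything below is schematic: the joint-position threshold in expert_rule, the brace list and the stub reoptimize are invented stand-ins for the rule of figure 6 and the real NLPQL run.

```python
def expert_rule(joint_position):
    """Invented stand-in for figure 6: a low joint favours a K-brace,
    a high joint an X-brace (the 0.5 threshold is arbitrary)."""
    return "K" if joint_position < 0.5 else "X"

def reoptimize(fixed):
    """Stub continuous optimizer: returns a fake optimal weight that
    grows slightly with each fixed (i.e. constrained) brace."""
    return 104.49 + 2.0 * len(fixed)

# joint positions (fraction of height) of the three braces - invented,
# ordered as in the text: middle, lowest, upper
braces = {"middle": 0.75, "lowest": 0.2, "upper": 0.9}

states = [("initial", 148.75)]
states.append(("free optimum", reoptimize({})))   # states 1 and 2 of figure 7

fixed = {}
for name, pos in braces.items():
    fixed[name] = expert_rule(pos)                # expert fixes one brace type
    states.append((f"{name} fixed as {fixed[name]}-brace", states[-1][1]))
    states.append(("re-optimized", reoptimize(fixed)))

for label, w in states:
    print(label, w)
```

The loop produces eight states, as in figure 7; note that each expert decision may raise the minimum weight, since it adds a topological constraint, while improving producibility.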

In figure 8 the variation of the smallest β-index and the weight W during the iteration is shown.
In the optimal design (state 8) only 4 failure elements have β = 4.00.
Figure 8. Iteration history: smallest β-index (left) and weight W in tons (right) versus iteration
number.

5. OPTIMAL STRATEGIES FOR INSPECTION AND MAINTENANCE OF STRUCTURAL SYSTEMS
A review of optimal reliability-based strategies for inspection and maintenance is given by Som-
mer & Thoft-Christensen [11]. Several strategies are described in [11]. In this paper a strategy
originally developed by Thoft-Christensen & Sørensen [12], and later improved by several authors,
e.g. Madsen & Sørensen [13], is described. The design variables are the number of inspections n,
the inspection qualities q and the inspection times T as well as some structural parameters z related
to the individual structural members. The objective function is the total expected cost in the
lifetime T of the structure, including the initial cost C_I, inspection cost C_IN, repair cost C_R
and the cost of failure C_F. The constraints are reliability-based and simple constraints. The
optimization problem can then (see e.g. [13]) be formulated in the following way

    min    C_I + Σ_{i=1}^{n} (C_IN(q_i)(1 - P_F(T_i)) + C_R E[R_i]) (1/(1+r))^{T_i}
    z,T,q
           + Σ_{i=1}^{n+1} C_F(T_i)(P_F(T_i) - P_F(T_{i-1})) (1/(1+r))^{T_i}

    s.t.   β(T) ≥ β_min                                               (4)
           Σ_{i=1}^{n} t_i ≤ T
           t_min ≤ t_i ≤ t_max,       i = 1, 2, ..., n
           q_min ≤ q_i ≤ q_max,       i = 1, 2, ..., n
           z_i^min ≤ z_i ≤ z_i^max,   i = 1, 2, ..., m

q_i is the inspection quality at inspection time T_i, i = 1, ..., n. E[R_i] is the expected number of
repairs at the time T_i. P_F(T_i) is the probability of failure at the time T_i and r is the inflation rate.
Extensive research in this area is being performed within the EC research programme BRITE. It
is believed that the optimal strategies for inspection and repair based on the formulation above
will result in substantial savings compared with traditional strategies.
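The objective of (4) — initial cost plus discounted expected inspection, repair and failure costs over the n inspections and the n+1 intervals — translates directly into a function. In the sketch below the failure-probability curve P_F, the expected number of repairs E[R_i] and all cost figures are invented placeholders; in practice these quantities come from the reliability analysis (see [13]).

```python
import math

def total_expected_cost(T_insp, q, *, life, CI, CIN, CR, CF, r, lam=0.002):
    """Expected lifetime cost in the spirit of eq. (4), with toy models:
    PF(t) = 1 - exp(-lam*t) as an invented failure-probability curve and
    E[R_i] = 0.1 as an invented expected number of repairs per inspection."""
    PF = lambda t: 1.0 - math.exp(-lam * t)
    ER = 0.1
    cost = CI
    # inspection + repair cost at each inspection time T_i, discounted
    for Ti, qi in zip(T_insp, q):
        disc = (1.0 + r) ** (-Ti)
        cost += (CIN * qi * (1.0 - PF(Ti)) + CR * ER) * disc
    # expected failure cost over the n+1 intervals (T_0 = 0, T_{n+1} = life)
    times = [0.0] + list(T_insp) + [life]
    for T_prev, Ti in zip(times, times[1:]):
        cost += CF * (PF(Ti) - PF(T_prev)) * (1.0 + r) ** (-Ti)
    return cost

c = total_expected_cost([5, 10, 15], [1.0, 1.0, 1.0],
                        life=20, CI=100.0, CIN=2.0, CR=10.0, CF=5000.0, r=0.05)
print(round(c, 2))
```

Minimizing such a function over the inspection times and qualities, subject to the reliability constraint β(T) ≥ β_min, is exactly the trade-off formalized in (4): a positive rate r discounts future inspection and failure costs relative to the initial cost.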

6. REFERENCES

[1] Jensen, F. M. & P. Thoft-Christensen: Optimization of Large, Complex Structures - An
Overview. University of Aalborg, Structural Reliability Theory Series, Paper No. 75, May 1990.
[2] Wismer, D. A.: Optimization Methods for Large-Scale Systems, with Applications. McGraw-
Hill, New York, 1971.
[3] Kirsch, U.: Multilevel Approach to Optimum Structural Design. Sixth Conference on
Electronic Computation, 1974, pp. 631-666.
[4] Sobieszczanski-Sobieski, J., B. James & M. F. Riley: Structural Sizing by Generalized
Multilevel Optimization. AIAA Journal, Vol. 25, 1987, pp. 139-145.
[5] Sobieszczanski-Sobieski, J.: A Linear Decomposition Method for Large Optimization Pro-
blems - Blueprint for Development. NASA, USA, Report TM-83248, 1982.
[6] Sobieszczanski-Sobieski, J., B. James & A. Dovi: Structural Optimization by Multilevel
Decomposition. NASA, USA, Report TM 84641, 1983.
[7] Jensen, F. M. & P. Thoft-Christensen: Reliability-Based Structural Optimization using
Linear Decomposition Technique. Not yet published.
[8] Enevoldsen, I., J. D. Sørensen & P. Thoft-Christensen: Shape Optimization of Mono-Tower
Offshore Platform. Proc. OPTI89, Southampton, UK, 1989.
[9] Thoft-Christensen, P.: Application of Optimization Methods in Structural Systems Relia-
bility Theory. Proc. 13th IFIP Conf. on System Modelling and Optimization (eds. M. Iri
& K. Yajima), Springer-Verlag, 1988, pp. 484-497.
[10] Frisk, L. & P. Poulsen: Formoptimering med pålidelighedssidebetingelser. M.Sc. thesis,
University of Aalborg, Denmark, June 1989 (in Danish).
[11] Sommer, A. M. & P. Thoft-Christensen: Inspection and Maintenance of Marine Steel
Structures - State-of-the-Art. University of Aalborg, Structural Reliability Theory Series,
Paper No. 74, April 1990.
[12] Thoft-Christensen, P. & J. D. Sørensen: Optimal Strategy for Inspection and Repair of
Structural Systems. Civil Engineering Systems, Vol. 4, 1987, pp. 94-100.
[13] Madsen, H. O. & J. D. Sørensen: Probability-Based Optimization of Fatigue Design,
Inspection and Maintenance. Proc. on Integrity of Offshore Structures, Glasgow, 1990.

APPENDIX: 125 References in Reliability-Based Structural Optimization

Aven, T.: Optimal Inspection and Replacement of a Coherent System. Microelectronics and
Reliability, Vol. 27, 1987, pp. 447-450.
Bourgund, U.: Reliability-Based Optimization of Structural Systems. Springer Lecture Notes in
Engineering, Vol. 31, 1987, pp. 52-65.
Bourgund, U.: Structural Optimization Based on Advanced Reliability Analysis. Proceedings
Int. Conf. on Computer Aided Design of Structures: Applications (eds. C. A. Brebbia & S.
Hernandez). Computational Mechanics Publ., 1989, pp. 243-251.
Brandt, A., S. Jendo & W. Marks: Probabilistic Approach to Reliability-Based Optimum
Design. Engineering Transactions, Polska Akademia Nauk, Vol. 32, 1984, pp. 57-74.

Brisighella, L., L. Simoni & P. Zaccaria: Optimum Elastoplastic Design under Environment.
ICASP 4, 1983, pp. 1261-1270.
Broding, W. C., F. W. Diederich & P. B. Parker: Structural Optimization and Design
Based on a Reliability Design Criterion. J. of Spacecraft, Vol. 1, 1964, pp. 56-61.
Burton, R. M. & G. T. Howard: Optimal System Reliability for a Mixed Series and Parallel
Structure. J. Math. Anal. and Appl., Vol. 28, 1969, pp. 370-382.
Carmichael, D. G.: Probabilistic Optimal Design of Framed Structures. Computer Aided De-
sign, Vol. 13, 1981, pp. 261-264.
Casciati, F. & L. Faravelli: Structural Reliability and Structural Design Optimization. ICOS-
SAR '85, Proceedings, III61-III70.
Cheng, F. Y. & C.-C. Chang: Optimality Criteria for Safety-Based Design. Proc. 5. ASCE-
EMD Specialty Conf. (eds. A. P. Boresi & K. P. Chong), Laramie, Wyoming, Vol. 1, 1984, pp.
54-57.
Cheng, F. Y. & C.-C. Chang: Optimum Design of Steel Buildings with Consideration of
Reliability. Proc. 4. Int. Conf. on Structural Safety and Reliability (eds. I. Konishi, A. H.-S. Ang
& M. Shinozuka), Kobe, Japan, Vol. III, 1985, pp. 81-89.
Chong-Hong, H., S. Yasuyuki & I. Takuzo: Reliability of a Structure Using Chance-
Constrained Programming. Bulletin of JSME, Vol. 21, 1978, pp. 37-43.
Corotis, R. B. & A. M. Nafday: Structural System Reliability Using Linear Programming and
Simulation. J. Struct. Eng., ASCE, Vol. 115, 1989.
Davidson, J. W., L. P. Felton & G. C. Hart: On Reliability-Based Structural Optimization
for Earthquakes. Computers & Structures, Vol. 12, 1980, pp. 99-105.
Davidson, J. W., L. P. Felton & G. C. Hart: Optimum Design of Structures with Random
Parameters. Computers & Structures, vol. 7, 1977, pp. 481-486.
Duffuaa, S. O. & A. Raouf: Mathematical Optimization Models for Multicharacteristic Repeat
Inspections. Appl. Math. Modelling, Vol. 13, 1988, pp. 408-412.
Ellis, J. H., R. B. Corotis & J. J. Zimmerman: Analysis of Structural System Reliability
with Chance Constrained Programming. Proceedings of Int. Conf. on Computer Aided Opti-
mum Design of Structures: Applications (eds. C. A. Brebbia & S. Hernandez). Computational
Mechanics Publ., 1989, pp. 253-263.
Enevoldsen, I., J. D. Sørensen & P. Thoft-Christensen: Shape Optimization of Mono-Tower
Offshore Platforms. Proc. Int. Conf. on Computer Aided Design of Structures: Applications (eds.
C. A. Brebbia & S. Hernandez). Computational Mechanics Publ., 1989, pp. 297-308.
Fagundo, F. E., M. Hoit & A. Soeiro: Probabilistic Design and Optimization of Reinforced
Concrete Frames. NATO ASI on Optimization and Decision Support Systems, Edinburgh, UK,
1989.
Feng, Y. S. & F. Moses: Optimum Design, Redundancy and Reliability of Structural Systems.
Computers & Structures, Vol. 24, 1986, pp. 239-251.
Feng, Y. S. & F. Moses: A Method of Structural Optimization Based on Structural System
Reliability. J. Struct. Mech., Vol. 14, 1986, pp. 437-453.
Frangopol, D. M.: Alternative Solutions in Reliability-Based Optimum Design. Proc. 5. Eng.
Mech. Div. Specialty Conf., EM Div./ASCE, Laramie, Wyoming, 1984, pp. 1232-1236.
Frangopol, D. M.: Interactive Reliability-Based Structural Optimization. Computers & Struc-
tures, Vol. 19, 1984, pp. 559-563.

Frangopol, D. M.: Structural Optimization Using Reliability Concepts. J. of Struct. Eng.,
ASCE, Vol. 111, 1985, pp. 2288-2301.
Frangopol, D. M.: Computational Experience Gained in Structural Optimization Under Uncer-
tainty. Proc. Int. Conf. on Computational Mech. (eds. G. Yagawa & S. N. Alturi), Tokyo, Japan.
Springer-Verlag, 1986.
Frangopol, D. M.: Sensitivity of Reliability-Based Optimum Design. J. Structural Eng., Vol.
111, 1985, pp. 1703-1721.
Frangopol, D. M.: Concepts and Methods in Reliability-Based Structural Optimization. Proc.
of Symp. Sponsored by ST Div., ASCE, Denver, May 1985, pp. 53-70.
Frangopol, D. M.: Computer-Automated Sensitivity Analysis in Reliability-Based Plastic De-
sign. Computers & Structures, Vol. 22, 1986, pp. 63-75.
Frangopol, D. M.: Structural Optimization Under Conditions of Uncertainty with Reference
to Serviceability and Ultimate Limit States. Proc. of "Structures '86", Recent Developments in
Structural Optimization. New Orleans, 1986, pp. 54-71.
Frangopol, D. M.: Computer-Automated Design of Structural Systems under Reliability-Based
Performance Constraints. Eng. Comput., Vol. 3, 1986, pp. 109-115.
Frangopol, D. M.: Sensitivity Studies in Reliability-Based Analysis of Redundant Structures.
Structural Safety, Vol. 3, 1985, pp. 13-22.
Frangopol, D. M.: Multicriteria Reliability-Based Structural Optimization. Structural Safety,
Vol. 3, 1985, pp. 23-28.
Frangopol, D. M.: Reliability Analysis and Optimization of Plastic Structures. ICASP 4, 1983,
Pitagora Editrice, 1983, pp. 1271-1288.
Frangopol, D. M. & J. Rosdal: Optimum Probability-Based Design of Plastic Structures. Eng.
Optimization, Vol. 3, 1977, pp. 17-25.
Frangopol, D. M. & R. Nakib: Reliability Analysis and Optimization of Redundant Systems.
Struct. Res. Rep., Dept. of Civil Eng., University of Colorado, Boulder, 1985.
Frangopol, D. M. & R. Nakib: Analysis and Optimum Design of Nondeterministic Structures
Under Random Loads. Proc. 9. Conf. on Electronic Computation, Birmingham, Alabama (ed.
K. M. Will), ASCE Publications, N. Y., 1986, pp. 483-493.
Fu, G. & D. M. Frangopol: Multicriterion Reliability-Based Optimization of Structural Sy-
stems. Proc. ASCE Specialty Conf. on Probabilistic Methods in Civil Engineering (ed. P. D.
Spanos), 1988, pp. 177-180.
Fujita, M., G. Schall & R. Rackwitz: Adaptive Reliability-Based Inspection Strategies for
Structures Subject to Fatigue. ICOSSAR'89, San Francisco, Cal., 1989.
Furukawa, K., H. Furuta & H. Ichikawa: Probabilistic Optimality Criteria Method Unifying
Stress and Deformation Requirement. Proc. ICOSSAR (eds. I. Konishi, A. H.-S. Ang & M.
Shinozuka), Kobe, Japan, Vol. III, 1985, pp. 91-100.
Furuta, H.: Fundamental Study on Geometrical Configuration and Reliability of Framed Struc-
tures Used for Bridges. Thesis, Dept. of Civil Eng., Kyoto University, Japan, 1980.
Ghista, D.: Structural Optimization with Probability of Failure Constraints. NASA TN-3777,
1966, pp. 1-15.
Heer, E. & J. N. Yang: Optimization of Structures Based on Fracture Mechanics and Reliability
Criteria. AIAA J., Vol. 9, 1971.

Hilton, H. H. & M. Feigen: Minimum Weight Analysis Based on Structural Reliability. J.
Aerospace Sciences, Vol. 27, 1960, pp. 641-663.
Ishikawa, N. & M. Iizuka: Optimal Reliability-Based Design of Large Framed Structures.
Engineering Optimization, Vol. 10, 1987, pp. 245-261.
Jozwiak, S. F.: Minimum Weight Design of Structures with Random Parameters. Computers &
Structures, Vol. 23, 1986, pp. 481-485.
Kalaba, R. E.: Design of Minimal-Weight Structures Given Reliability and Cost. J. Aerospace
Sciences, Vol. 29, 1962, pp. 355-356.
Kaio, N. & S. Osaki: Some Remarks on Optimum Inspection Policies. IEEE Trans. on Relia-
bility, Vol. R-33, No.4, 1984.
Kaio, N. & S. Osaki: Optimal Inspection Policy with Two Types of Imperfect Inspection
Probabilities. Microelectronics and Reliability, Vol. 26, 1986, pp. 935-942.
Keller, J. B.: Optimum Checking Schedules for Systems Subject to Random Failure. Manage-
ment Science, Vol. 21, 1974, pp. 256-260.
Khalessi, M. R.: Reliability-Based Optimization for Design. Ph. D. Thesis, University of
California, L. A., 1983.
Khalessi, M. & L. P. Felton: On Reliability-Based Structural Optimization for Earthquakes.
3. Conf. on Dynamic Response of Structures, ASCE, 1986, pp. 330-341.
Kim, S. H. & Y. K. Wen: Reliability-Based Structural Optimization under Stochastic Time-
Varying Loads. Proc. Joint ASME/SES Appl. Mech. and Eng. Sciences Conf. on Computational
Probabilistic Methods (eds. W. K. Liu, T. Belytschko, M. Lawrence & T. Cruse), Berkeley, 1988,
pp. 49-60.
Kinser, D. E.: Elastic Minimum Weight Design with a Probability of Failure Constraint. Ph. D.
Thesis, Case Inst. of Technology, Cleveland, 1966.
Kishi, M., Y. Murotsu, H. Okada & K. Taguchi: Probabilistically Optimum Design of
Offshore Platforms Considering Maintenance Costs. ICOSSAR'85, Proceedings, 1985, pp. II557-
II563.
Kwak, B. M. & E. J. Haug: Optimum Design in the Presence of Parametric Uncertainty. J.
Optim. Theory Appl., Vol. 19, 1976, pp. 527-545.
Kwak, B. M. & T. W. Lee: Sensitivity Analysis for Reliability-Based Optimization Using an
AFOSM Method. Computers & Structures, Vol. 27, 1987, pp. 399-406.
Lawrence, M.: Probability-Based Tools for Interactive Computer-Aided Design. Proc. Joint
ASME/SES Appl. Mech. and Eng. Sciences Conf. on Computational Probabilistic Methods,
Berkeley, Cal. (eds. W. K. Liu, T. Belytschko, M. Lawrence & T. Cruse), 1988, pp. 37-48.
Lee, S. J. & P. H. Wirsching: Reliability-Based Optimal Structural and Mechanical Design.
Proc. ASCE Specialty Conf. on Probabilistic Methods in Civil Engineering (ed. P. D. Spanos),
1988, pp. 348-355.
Lee, T. W. & B. M. Kwak: A Reliability-Based Optimal Design Using Advanced First Order
Second Moment Method. Mech. Struct. & Mach., Vol. 15, 1987, pp. 523-542.
Liu, W. K., G. Besterfield, M. Lawrence & T. Belytschko: Kuhn-Tucker Optimization
Based Reliability Analysis for Probabilistic Finite Elements. Proc. Joint ASME/SES Appl. Mech.
and Eng. Sciences Conf. on Computational Probabilistic Methods, Berkeley, Cal., 1988, pp.
135-149.

Madsen, H. O., J. D. Sørensen & R. Olesen: Optimal Inspection Planning for Fatigue
Damage of Offshore Structures. ICOSSAR'89, San Francisco, 1989.
Madsen, H. O. & J. D. Sørensen: Probability-Based Optimization of Fatigue Design,
Inspection and Maintenance. Proc. on Integrity of Offshore Structures, Glasgow, 1990.
Mahadevan, S.: Stochastic Finite Element-Based Structural Reliability Analysis and Optimiza-
tion. Ph. D. Thesis, Georgia Institute of Technology, Atlanta, 1988.
Mahadevan, S. & A. Haldar: Stochastic Finite Element-Based Optimum Design of Large
Structures. Proceedings of Int. Conf. on Computer Aided Design of Structures: Applications
(eds. C. A. Brebbia & S. Hernandez). Computational Mechanics Publ., 1989, pp. 265-274.
Mahadevan, S. & A. Haldar: Efficient Algorithm for Stochastic Structural Optimization. J.
Struct. Engineering, Vol. 115, 1989, pp. 1579-1598.
Mjelde, K. M.: Inspection-Optimization for Serial Systems of Dependent Elements. Structural
Safety, Vol. 2, 1984, pp. 119-125.
Mohammadi, J.: Seismic Safety of Lifelines - An Optimum Design Method. Structural Safety,
Vol. 2, 1985, pp. 301-308.
Moses, F.: Approaches to Structural Reliability and Optimization. An Int. to Structural Opti-
mization (ed. M. Z. Cohn), Solid Mechanics Div., University of Waterloo, 1969, pp. 81-120.
Moses, F.: Sensitivity Studies in Structural Reliability. Structural Reliability and Codified Design
(ed. N. C. Lind). Solid Mech. Div., University of Waterloo, 1970, pp. 1-17.
Moses, F., R. Fox & G. Goble: Mathematical Programming Applications in Structural Design.
SM Study, 1970, pp. 379-39l.
Moses, F.: Optimization with Reliability Constraints. Automated Design and Optimization,
Trondheim Press, 1972.
Moses, F. & D. E. Kinser: Optimum Structural Design with Failure Probability Constraints.
AIAA Journal, Vol. 5, 1967, pp. 1152-1158.
Moses, F. & J. D. Stevenson: Reliability-Based Structural Design. J. Struct. Div., ASCE,
Vol. 96, 1970, pp. 221-244.
Moses, F.: Structural System Reliability and Optimization. Computers & Structures, Vol. 7,
1977, pp. 283-290.
Munford, A. G. & A. K. Shahani: A Nearly Optimal Inspection Policy. Operational Research
Quarterly, Vol. 23, 1972, pp. 373-379.
Murotsu, Y., M. Yonezawa, F. Oba & K. Niwa: A Method for Reliability Analysis and
Optimal Design of Structural Systems. Proc. 12. Int. Symp. on Space Technology and Science,
1977, pp. 1047-1054.
Murotsu, Y., M. Kishi, H. Okada, M. Yonezawa & K. Taguchi: Probabilistic Optimum
Design of Plane Structures. Proc. IFIP Conf. on System Modelling and Optimization (ed. P.
Thoft-Christensen). Lecture Notes in Control and Information Sciences, Vol. 59, 1984, pp. 545-
554.
Murotsu, Y., M. Yonezawa, F. Oba & K. Niwa: Optimum Structural Design Based on
Extended Reliability Theory. Proc. 11th Congress of Int. Council of the Aeronautical Sciences,
Lisbon, 1978-79, pp. 572-581.
Murotsu, Y., M. Yonezawa, F. Oba & K. Niwa: Optimum Structural Design under Con-
straint on Failure Probability. ASME Publication 79-DET-114, 1979, pp. 1-12.

Murotsu, Y., M. Yonezawa, F. Oba & K. Niwa: Optimum Design Problems in Reliability-
Based Structural Design. HOPE Int. JSME Symp., Tokyo, Oct./Nov. 1977, pp. 461-466.
Murthy, P. N. & G. Subramanian: Minimum Weight Analysis Based on Structural Reliability.
AIAA Journal, Vol. 6, 1968, pp. 2037-2038.
Nakagawa, T. & K. Yasui: Approximate Calculation of Optimal Inspection Times. J. Opera-
tional Research Society, Vol. 31, 1980.
Nakib, R. & D. M. Frangopol: Reliability-Based Analysis and Optimization of Ductile Struc-
tural Systems. Structural Research Series 8501, Dept. Civil, Env. and Archit. Eng., University of
Colorado, Boulder, 1985.
Nikolaidis, E. & R. Burdisso: Reliability-Based Optimization: A Safety Index Approach.
Computers & Structures, Vol. 28, 1988, pp. 781-788.
Ohnishi, M., H. Kawai & H. Mine: An Optimal Inspection and Replacement Policy for a
Deteriorating System. J. Applied Probability, Vol. 23, 1986, pp. 973-988.
Parimi, S. R. & M. Z. Cohn: Optimal Criteria in Probabilistic Structural Design. Optimization
in Structural Design (eds. A. Sawczuk & Z. Mroz), Springer-Verlag, 1975, pp. 278-293.
Parimi, S. R. & M. Z. Cohn: Optimal Solutions in Probabilistic Structural Design. Part I:
Theory. Part II: Applications. J. de Mecanique Appliquee, Vol. 2, 1978, pp. 47-92.
Rackwitz, R.: Optimization of Measures for Quality Assurance. IABSE Symposium, Tokyo
1986, IABSE Report, Vol. 51, pp. 91-100.
Rashed, R. & F. Moses: Application of Linear Programming to Structural System Reliability.
Computers & Structures, Vol. 24, 1986, pp. 375-384.
Rojiani, K. B. & G. L. Bailey: A Comparison of Reliability-Based Optimum Design and AISC
Code Based Design. Proc. Int. Symp. on Optimal Structural Design, Tucson, Ariz., 1981.
Rosenblueth, E.: Reliability-Based Optimum Design of Offshore Platforms. Int. J. of Prob. and
Statistics in Eng. Research and Development. M. Dekker Inc., Vol. 1, 1983.
Rosenblueth, E. & E. Mendoza: Reliability Optimization in Isostatic Structures. J. Eng.
Mech. Div., ASCE, Vol. 97, 1971, pp. 1625-1640.
Schueller, G. I.: Reliability-Based Optimum Design of Offshore Platforms. Int. J. of Prob. and
Statistics in Eng. Research and Development. M. Dekker Inc., Vol. 1, 1983.
Sherif, Y. S. & M. L. Smith: Optimal Maintenance Models for Systems Subject to Failure - A
Review. Naval Research Logistics Quarterly, USA, Vol. 28, 1981, pp. 47-74.
Shinozuka, M. & J. N. Yang: Optimum Structural Design Based on Reliability and Proof-Load
Testing. NASA Tech. Rept. 1032-1042, 1969.
Shinozuka, M., J. N. Yang & E. Heer: Optimal Structural Design Based on Reliability
Analysis. 8. Int. Conf. on Space Science and Tech., Japan, 1969.
Shiraishi, N. & H. Furuta: Safety Analysis and Minimum Weight Design of Rigid Frames Based
on Reliability Concepts. Memoirs of the Faculty of Engineering, Kyoto University, Vol. XLI, Part
4, 1979, pp. 474-479.
Simoes, L. M. C.: Reliability-Based Plastic Synthesis of Reinforced Concrete Slabs. Proc. Int.
Conf. on Computer Aided Optimum Design of Structures: Applications (eds. C. A. Brebbia & S.
Hernandez), 1989, pp. 285-295.

Simoes, L. M. C.: Reliability-Based Plastic Synthesis of Portal Frames. NATO ASI on
Optimization and Decision Support Systems, Edinburgh, UK, 1989.
Skjong, R.: Reliability-Based Optimization of Inspection Strategies. Proc. ICOSSAR'85, 1985,
pp. III614-III618.
Soltani, M. & R. B. Corotis: Failure Cost Design of Structural Systems. Structural Safety,
Vol. 5, 1988, pp. 239-252.
Stevenson, J. D.: Reliability Analysis and Optimum Design of Structural Systems with
Applications to Rigid Frames. Rep. No. 14, Structures and Mech. Design Div., Case Western
Reserve University, Cleveland, 1967.
Surahman, A. & K. B. Rojiani: Reliability-Based Optimum Design of Concrete Frames. J.
Structural Eng., ASCE, Vol. 109, 1981, pp. 741-757.
Switzky, H.: Minimum Weight Design with Structural Reliability. Proc. 5 Annual Structures
and Materials Conf., Am. Inst. of Aeronautics and Astronautics, Palm Springs, Cal., 1964.
Switzky, H.: Minimum Weight with Structural Reliability. J. of Aircraft, Vol. 2, 1965, pp.
228-232.
Sørensen, J. D.: PRADSS: Program for Reliability Analysis and Design of Structural Systems.
Structural Reliability Theory, Paper No. 36, The University of Aalborg, Denmark, 1988.
Sørensen, J. D.: Probabilistic Design of Offshore Structural Systems. Proc. ASCE Specialty
Conf. on Probabilistic Methods in Civil Engineering (ed. P. D. Spanos), 1988, pp. 189-192.
Sørensen, J. D.: Reliability-Based Optimization of Structural Elements. Structural Reliability
Theory, Paper No. 18, The University of Aalborg, Denmark, 1986.
Sørensen, J. D.: Reliability-Based Optimization of Structural Systems. 13th IFIP Conference
on "System Modelling and Optimization", Tokyo, Japan, 1987.
Sørensen, J. D.: Optimal Design with Reliability Constraints. Structural Reliability Theory,
Paper No. 45, The University of Aalborg, Denmark, 1988.
Sørensen, J. D. & I. Enevoldsen: Sensitivity Analysis in Reliability-Based Shape Optimization.
NATO ASI on Optimization and Decision Support Systems, Edinburgh, UK, 1989.
Sørensen, J. D. & P. Thoft-Christensen: Reliability-Based Optimization of Parallel Systems.
Proc. 14th IFIP TC-7 Conf. on System Modelling and Optimization, Leipzig, GDR, July 1989.
Sørensen, J. D. & P. Thoft-Christensen: Integrated Reliability-Based Optimal Design of
Structures. Proc. IFIP Conf. on Reliability and Optimization of Structural Systems (ed. P.
Thoft-Christensen). Lecture Notes in Engineering, Vol. 33, Springer-Verlag, 1987, pp. 385-398.
Sørensen, J. D. & P. Thoft-Christensen: Inspection Strategies for Concrete Bridges. Proc.
IFIP Conf. on Reliability and Optimization of Structural Systems '88 (ed. P. Thoft-Christensen).
Lecture Notes in Engineering, Vol. 48, Springer-Verlag, 1989, pp. 325-335.
Sørensen, J. D. & P. Thoft-Christensen: Structural Optimization with Reliability Con-
straints. Proc. IFIP Conf. on System Modelling and Optimization (eds. A. Prekopa, J. Szelezsan
& B. Strazicky). Lecture Notes in Engineering, Vol. 84, Springer-Verlag, 1986, pp. 876-885.
Thoft-Christensen, P.: Applications of Structural Systems Reliability Theory in Offshore En-
gineering. State-of-the-Art. Proc. Int. Symp. on Integrity of Offshore Structures, Glasgow, UK.
Elsevier Applied Science, 1987, pp. 1-23.
Thoft-Christensen, P.: Application of Optimization Methods in Structural Systems Reliability
Theory. Proc. IFIP Conf. on System Modelling and Optimization (eds. M. Iri & K. Yajima).
Lecture Notes in Control and Information Sciences, Vol. 113, Springer-Verlag, 1988, pp. 484-497.

Thoft-Christensen, P. & J. D. Sørensen: Optimization and Reliability of Structural Systems.
NATO ASI on Computational Math. Programming, Bad Windsheim, FRG, July/August 1984.
Thoft-Christensen, P. & J. D. Sørensen: Reliability Analysis of Tubular Joints in Offshore
Structures. Reliability Engineering, Vol. 19, 1987, pp. 171-184.
Thoft-Christensen, P. & J. D. Sørensen: Optimal Strategy for Inspection and Repair of
Structural Systems. Civ. Eng. Syst., Vol. 4, 1987, pp. 94-100.
Thoft-Christensen, P. & J. D. Sørensen: Recent Advances in Optimal Design of Structures
from a Reliability Point of View. Quality & Reliability Management, Vol. 4, 1987, pp. 19-31.
Thoft-Christensen, P. & Y. Murotsu: Application of Structural Systems Reliability Theory.
Springer-Verlag, 1986.
Vanmarcke, E. H., J. Diaz-Padilla & D. A. Roth: Reliability-Based Optimum Design of
Simple Plastic Frames. R72-46, Dept. of Civil Eng., MIT, Cambridge, Mass., 1972.
Yasui, K., T. Nakagawa & S. Osaki: A Summary of Optimum Replacement Policies for a
Parallel Redundant System. Microelectronics and Reliability, Vol. 28, 1988, pp. 635-641.
Yonezawa, M., Y. Murotsu, F. Oba & K. Niwa: Optimum Reliability and Structure in
Reliability-Based Structural Design. Archives of Mechanics, Vol. 30, 1978, pp. 227-241.
INDEX OF AUTHORS

Abdo, T. 1 Liu, P.-L. 223
Arnbjerg-Nielsen, T. 13 Løseth, R. 349
Augusti, G. 23 Madsen, H. O. 185
Ayyub, B. M. 33 Mahadevan, S. 241
Bell-Wright, T. F. 33 Manners, W. 251
Borri, A. 23 Melchers, R. E. 261
Breitung, K. 43 Millwater, R. E. 63
Corotis, R. B. 359 Moses, F. 113
Costa, F. Vasco 53 Murayama., O. 161,173
Cruse, T. A. 63 Murotsu, Y. 273
Der Kiureghian, A. 161,211 Noguchi, H. 161
Dias, J. B. 63 Nowak, A. S. 287
Ditlevsen, O. 13 Pedersen, P. Terndrup 185
Eliopoulos, D. 339 Quek, S. T. 273
Ellis, J. H. 359 Rackwitz, R. 1
Enevoldsen, I. 75 Ronold, K. O. 303
Esteva, L. 89 Rosenblueth, E. 315
Foutch, D. A. 339 Shao, S. 273
Frangopol, D. M. 99 Shiraishi, N. 129
Fu, G. 113 Sigurdsson, G. 75
Furuta, H. 129, 197 Speranzini, E. 23
Gierlinski, J. T. 139 Sugito, M. 129
Grigoriu, M. 147 Sørensen, J. D. 75, 139, 329
Haldar, A. 241 Tabsh, S. W. 287
Harren, S. V. 63 Tanaka, H. 197
Hisada, T. 161 Thoft-Christensen, P. 329, 387
Holnicki-Szulc, J. 139 Wen, Y. K. 339
Hoshiya, M. 173 White, G. J. 33
Igusa, T. 369 Winterstein, S. 349
Jara, J. M. 315 Yamamoto, S. 129
Jensen, J. J. 185 Yingwei, L. 113
Kamei, M. 197 Yu, C.-Y. 339
Kaneyoshi, M. 197 Zimmermann, J. J. 359
Klisinski, M. 99
SUBJECT INDEX

acceptable probability of failure 54 highway bridge structure 116
analysis of uncertainty 251 inspection 394
asymptotics 46 jack-up platform 185, 303
autoregressive process model 173 Kalman filter 173
Bayesian analysis 211 knowledge-based optimization 391
Bayesian parameter estimation 241 large complex structures 389
beta-point algorithm 1 life expectancy assessment 33
bridge load models 288 likelihood functions 214
bridge resistance models 291 linear decomposition method 389
bridge, composite steel girder 287 long-term reliability 303
cable-stayed bridge 202 maintenance 394
calibration 89 maintenance of existing structures 116
calibration of models with reality 95 minimax method 102
calibration of reliabilities 93 model coordination method 389
classification of uncertainty 251 model uncertainty 21
configuration optimization 388 models of structural behaviour 255
corbel 81 mono-tower platform 390
cross-sectional optimization 388 moment-spring coupled beam 378
damage analysis 341 moving load 71
damage tolerability 115 natural variability 254
Daniels systems 147 NESSUS code 63
deterministic optimization 140 non-linear forces 349
directional simulation 261 non-linear wave force model 185
dominant failure paths 274 optimal allocation of resources 23
dynamic response 349 optimal risk reduction 27
earthquake loads 129 optimal strategies 394
earthquake statistics 319 optimum cable tension adjustment 197
elasticity 44 parameter sensitivity 43
elasto-plastic analysis 63 Pareto optimal solution 31
elasto-plastic dynamic problems 161 permanent displacements 329
epsilon-constraint method 102 pier 135
existing structures 129 plane frame 24, 90, 169, 245, 278, 363, 392
expectation ratio 53 plastic system reliability 13
failure modes, generation 273 plates 34, 85, 234
fatigue failure 39 power spectrum 130
fibers 155 random fields 225
fixed-ended beam 231 redundancy 115
foundation stability 306 reliability of buildings 23
foundation, jack-up platform 303 reliability, lower bound 14
fuzzy regression analysis 197 reliability-based optimization 141, 241, 387,
goal coordination method 389 395
gradients of dynamic systems 163 reliability-based optimization, references 395
gradients of static systems 162 response analysis 341
ground motion, modeling 340 response statistics 355

rigid-ideal plasticity 13 storm loading 303
risk reduction 25 structural system reliability 113
seismic design coefficients 315 the simple method 334
seismic loading 339 thick-walled cylinder 70
seismic reliability analysis 89, 133 time-dependent reliability problems 261
seismic response model 130 time-invariant problems 1
seismometer array 177 time-variant problems 1
sensitivity 43, 78, 139, 287 total optimization 388
shape optimization 75, 388, 390 transient analysis 63
significant collapse modes 359 truss structure 165
simulation 330 two-mode systems 371
simulation, directional 261 utility 56, 316
size effect, random field elements 223 VDM 142
soil-structure interaction 306 vector optimization 99
sources of uncertainty 213, 253 vector optimization, formulation 100
space frame 18 vector optimization, solution 100
space truss 104 virtual distortion method 139
spring-supported beam 381 wave force models 350
steel buildings, reliability 339 weighting method 101
stochastic differential equations 331 wide-band excitation 369
stochastic FEM 64, 75, 241
Lecture Notes in Engineering
Edited by C.A. Brebbia and S.A. Orszag

Vol. 40: R. Borghi, S. N. B. Murthy (Eds.) Vol. 49: J. P. Boyd
Turbulent Reactive Flows Chebyshev & Fourier Spectral Methods
VIII, 950 pages. 1989 XVI, 798 pages. 1989

Vol. 41: W.J.Lick Vol. 50: L. Chibani


Difference Equations Optimum Design of Structures
from Differential Equations VIII, 154 pages. 1989
X, 282 pages. 1989
Vol. 51: G. Karami
Vol. 42: H. A. Eschenauer, G. Thierauf (Eds.) A Boundary Element Method for
Discretization Methods Two-Dimensional Contact Problems
and Structural Optimization - VII, 243 pages. 1989
Procedures and Applications
Proceedings of a GAMM-Seminar Vol. 52: Y. S. Jiang
October 5-7,1988, Siegen, FRG Slope Analysis Using
XV, 360 pages. 1989 Boundary Elements
IV, 176 pages. 1989
Vol. 43: C. C. Chao, S. A. Orszag, W. Shyy (Eds.)
Recent Advances in Computational Vol. 53: A. S. Jovanovic,
Fluid Dynamics K. F. Kussmaul, A. C. Lucia,
Proceedings of the US/ROC (Taiwan) Joint
Workshop in Recent Advances in Expert Systems in Structural
Computational Fluid Dynamics Safety Assessment
V, 529 pages. 1989 X, 493 pages. 1989

Vol. 44: R. S. Edgar Vol. 54: T. J. Mueller (Ed.)


Field Analysis and Low Reynolds Number
Potential Theory Aerodynamics
XII, 696 pages. 1989 V, 446 pages. 1989

Vol. 45: M. Gad-el-Hak (Ed.) Vol. 55: K. Kitagawa


Advances in Fluid Mechanics Boundary Element Analysis
Measurements of Viscous Flow
VII, 606 pages. 1989 VII, 136 pages. 1990

Vol. 46: M. Gad-el-Hak (Ed.) Vol. 56: A. A. Aldama


Frontiers in Experimental Filtering Techniques for
Fluid Mechanics Turbulent Flow Simulation
VI, 532 pages. 1989 VIII, 397 pages. 1990

Vol. 47: H. W. Bergmann (Ed.) Vol. 57: M. G. Donley, P. D. Spanos


Optimization: Methods and Applications, Dynamic Analysis of Non-Linear
Possibilities and Limitations Structures by the Method of
Proceedings of an International Seminar Statistical Quadratization
Organized by Deutsche Forschungsanstalt für
Luft- und Raumfahrt (DLR), Bonn, June 1989
IV, 155 pages. 1989 Vol. 58: S. Naomis, P. C. M. Lau
Computational Tensor Analysis
Vol. 48: P. Thoft-Christensen (Ed.) of Shell Structures
Reliability and Optimization XII, 304 pages. 1990
of Structural Systems '88
Proceedings of the 2nd IFIP WG 7.5 Conference
London, UK, September 26-28, 1988
VII, 434 pages. 1989

For information about Vols. 1-39 please contact your bookseller or Springer-Verlag.
