Lecture Notes in Engineering
Edited by C. A. Brebbia and S. A. Orszag
IFIP 61
A. Der Kiureghian,
P. Thoft-Christensen (Eds.)
"
I'.:
-
Springer-Verlag
Berlin Heidelberg New York London
. Paris Tokyo Hong Kong Barcelona
Series Editors
C. A. Brebbia . S. A. Orszag
Consulting Editors
J. Argyris · K.-J. Bathe · A. S. Cakmak · J. Connor · R. McCrory
C. S. Desai · K.-P. Holz · F. A. Leckie · G. Pinder · A. R. S. Pont
J. H. Seinfeld · P. Silvester · P. Spanos · W. Wunderlich · S. Yip
Editors
A. Der Kiureghian
University of California
Dept. of Civil Engineering
721B Davis Hall
Berkeley, California 94720
USA
P. Thoft-Christensen
The University of Aalborg
Institute of Building Technology
and Structural Engineering
Sohngaardsholmsvej 57
9000 Aalborg
Denmark
ISBN-13: 978-3-540-53450-1    e-ISBN-13: 978-3-642-84362-4
DOI: 10.1007/978-3-642-84362-4
This work is subject to copyright. All rights are reserved, whether the whole or part of the material
is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation,
broadcasting, reproduction on microfilms or in other ways, and storage in data banks. Duplication
of this publication or parts thereof is only permitted under the provisions of the German Copyright
Law of September 9, 1965, in its version of June 24, 1985, and a copyright fee must always be paid.
Violations fall under the prosecution act of the German Copyright Law.
© International Federation for Information Processing, Geneva, Switzerland, 1991
The use of registered names, trademarks, etc. in this publication does not imply, even in the
absence of a specific statement, that such names are exempt from the relevant protective laws
and regulations and therefore free for general use.
This proceedings volume contains 33 papers presented at the 3rd Working Conference on "Reliability
and Optimization of Structural Systems", held at the University of California, Berkeley,
California, USA, March 26 - 28, 1990. The Working Conference was organised by the IFIP (Inter-
national Federation for Information Processing) Working Group 7.5 of Technical Committee 7 and
was the third in a series, following similar conferences held at the University of Aalborg, Denmark,
May 1987 and at the Imperial College, London, UK, September 1988. The Working Conference
was attended by 48 participants from 12 countries.
The objectives of Working Group 7.5 are:
• to promote modern structural systems optimization and reliability theory,
• to advance international cooperation in the field of structural system optimization and reliability
theory,
• to stimulate research, development and application of structural system optimization and reli-
ability theory,
• to further the dissemination and exchange of information on reliability and optimization of
structural systems,
• to encourage education in structural system optimization and reliability theory.
At present the members of the Working Group are:
The Working Conference received financial support from IFIP, the University of California at
Berkeley, and the University of Aalborg.
On behalf of WG 7.5 and TC-7 the co-chairmen of the Conference would like to express their
sincere thanks to the sponsors, to the members of the Organizing Committee for their valuable
assistance, and to the authors for their contributions to these proceedings. Special thanks are due
to Mrs. Kirsten Aakjær, University of Aalborg, for her efficient work as conference secretary, and
to Ms. Gloria Partee, University of California at Berkeley, for her valuable assistance in carrying
out local organizational matters.
June 1990
Short Presentations
A New Beta-Point Algorithm for Large Time-Invariant and Time-Variant Reliability
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
T. Abdo, R. Rackwitz
Rigid-Ideal Plastic System Reliability . . . . . . . . . . . . . . . . . . . . . . . 13
Torben Arnbjerg-Nielsen, Ove Ditlevsen
Optimal Allocation of Available Resources to Improve the Reliability of Building Systems 23
Giuliano Augusti, Antonio Borri, Emanuela Speranzini
Life Expectancy Assessment of Structural Systems . . . . 33
Bilal M. Ayyub, Gregory J. White, Thomas F. Bell-Wright
Parameter Sensitivity of Failure Probabilities. 43
Karl Breitung
Expectation Ratio Versus Probability . . . . 53
F. Vasco Costa
Application of Probabilistic Structural Modelling to Elastoplastic and Transient Analysis 63
T. A. Cruse, H. R. Millwater, S. V. Harren, J. B. Dias
Reliability-Based Shape Optimization Using Stochastic Finite Element Methods 75
Ib Enevoldsen, J. D. Sørensen, G. Sigurdsson
Calibration of Seismic Reliability Models. . . . . . . . . . . . . . . . . . 89
Luis Esteva
Computational Experience with Vector Optimization Techniques for Structural Systems 99
Dan M. Frangopol, Marek Klisinski
Management of Structural System Reliability. . . . . . . . . 113
Gongkang Fu, Liu Yingwei, Fred Moses
Reliability Analysis of Existing Structures for Earthquake Loads 129
Hitoshi Furuta, Masata Sugito, Shin-ya Yamamoto, Naruhito Shiraishi
Sensitivity Analysis of Structures by Virtual Distortion Method. . . . 139
J. T. Gierlinski, J. Holnicki-Szulc, J. D. Sørensen
Reliability of Daniels Systems with Local Load Sharing Subject to Random Time
Dependent Inputs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Mircea Grigoriu
Reliability Analysis of Elasto-Plastic Dynamic Problems. . . . . . . . . . 161
Toshiaki Hisada, Hirohisa Noguchi, Osamu Murayama, Armen Der Kiureghian
Identification of Autoregressive Process Model by the Extended Kalman Filter 173
Masaru Hoshiya, Osamu Maruyama
The Effect of a Non-Linear Wave Force Model on the Reliability of a Jack-Up Platform 185
J. Juncher Jensen, Henrik O. Madsen, P. Terndrup Pedersen
Optimum Cable Tension Adjustment Using Fuzzy Regression Analysis . 197
Masakatsu Kaneyoshi, Hiroshi Tanaka, Masahiro Kamei, Hitoshi Furuta
Bayesian Analysis of Model Uncertainty in Structural Reliability . . . 211
Armen Der Kiureghian
Size Effect of Random Field Elements on Finite-Element Reliability Methods. 223
Pei-Ling Liu
Reliability-Based Optimization Using SFEM . . . . . . . . . 241
Sankaran Mahadevan, Achintya Haldar
Classification and Analysis of Uncertainty in Structural Systems 251
William Manners
Directional Simulation for Time-Dependent Reliability Problems 261
Robert E. Melchers
Some Studies on Automatic Generation of Structural Failure Modes. 273
Yoshisada Murotsu, Shaowen Shao, Ser Tong Quek
Sensitivity Analysis for Composite Steel Girder Bridges 287
Andrzej S. Nowak, Sami W. Tabsh
Long-Term Reliability of a Jackup-Platform Foundation 303
Knut O. Ronold
Constant Versus Time Dependent Seismic Design Coefficients. 315
Emilio Rosenblueth, Jose Manuel Jara
Reliability of Structural Systems with Regard to Permanent Displacements. 329
J. D. Sørensen, P. Thoft-Christensen
Lectures
Critical Configurations of Systems Subjected to Wide-Band Excitation 369
Takeru Igusa
On Reliability-Based Optimal Design of Structures 387
P. Thoft-Christensen
Index of authors 403
Subject index 405
A NEW BETA-POINT ALGORITHM FOR LARGE TIME-INVARIANT
AND TIME-VARIANT RELIABILITY PROBLEMS

T. Abdo and R. Rackwitz
Introduction
Several methods exist to compute the probability integrals occurring in structural reliability. These
integrals have the general form

I(V) = \int_V h(x) f(x) \, dx \qquad (1)

where V is the integration domain, X the vector of uncertain basic variables, h(x) a certain smooth
function and f(x) the (continuous) probability density of X. In the general case V is given by
V = \bigcap_{i=1}^{m} V_i with V_i = \{ g_i(x) \le 0 \} as individual failure domains. The state functions g_i(x) are
assumed to be (locally) twice differentiable. This integral includes as special cases simple probability
integrals, integrals for the mean number of excursions in random process or random field theory, and
certain expectations. In many cases there is h(x) = 1 and m = 1. Because numerical integration fails
to be practicable except in very low dimensions of X, approximate methods have been developed; the
most important ones are based on the theory of asymptotic Laplace integrals (Breitung, 1984;
Hohenbichler et al., 1987). According to that theory a critical point x* in V needs to be located where
the integrand f(x) becomes maximal and where the boundary of the failure domain can be
expanded into a first- or second-order Taylor series so that the integration is analytic. So far computations
are mostly performed in the so-called standard space, i.e. the original vector X is transformed by an
appropriate probability distribution transformation into a standard normal vector with independent
components. Proposals have also been put forward which do not need this probability distribution
transformation (Breitung, 1989). It appears, however, that this advantage only rarely compensates
for certain difficulties which, apparently, must mainly be regarded as problems of scaling in the
numerical analysis. In the following we discuss primarily formulations in the standard space. A similar
discussion for original space formulations will be presented in a separate paper. In any case the integration
is reduced to a search for the critical point and some simple algebra. Here we will be concerned only
with the search for the critical point.
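As a minimal numerical illustration of this reduction (not part of the original exposition), consider the special case h(x) = 1, m = 1 with a linear state function in the standard space; the distance β to the critical point then gives the probability Φ(−β) exactly, which a crude Monte Carlo estimate confirms. The state function, dimension and sample size below are illustrative assumptions:

```python
# Illustrative sketch: for h(x) = 1, m = 1 and a linear state function
# g(u) = beta - a^T u with ||a|| = 1 in the standard normal space, the
# probability integral equals Phi(-beta), beta being the distance from
# the origin to the critical point u*.
import math
import random

random.seed(0)
n, beta = 4, 2.0
a = [1.0 / math.sqrt(n)] * n                      # unit normal of the limit-state plane

def g(u):                                         # g(u) <= 0 defines the failure domain V
    return beta - sum(ai * ui for ai, ui in zip(a, u))

def phi(z):                                       # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

N = 200_000
hits = sum(g([random.gauss(0.0, 1.0) for _ in range(n)]) <= 0.0 for _ in range(N))
p_mc, p_form = hits / N, phi(-beta)
print(abs(p_mc - p_form) < 5e-3)                  # Monte Carlo agrees with Phi(-beta)
```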
The first important step towards an accurate and efficient failure probability calculation was made by
Hasofer/Lind (1974). Their search algorithm was a simple gradient algorithm. Later it was modified
and made globally convergent by Rackwitz/Fiessler (1978). The most important modification consisted
in truncating the step length as the number of iterations increased, according to Psenicnyi (1970).
Another improvement consisted in adding certain line searches which made the algorithm quite reliable
but also expensive in complicated cases. In the following this algorithm will be denoted the RF-algorithm
for brevity. Later investigations by a number of authors have studied almost the full
spectrum of available algorithms in mathematical programming (see especially Liu/Der Kiureghian, 1986,
for a thorough discussion) for their applicability in structural reliability calculations, such as gradient-free
algorithms, penalty methods, stochastic programming methods, but also the so-called sequential
quadratic programming method (SQP-method), which theoretically is now considered the most
efficient method (see, for example, Gill et al., 1981; Hock/Schittkowski, 1983; Arora, 1989). Based on
their studies Liu/Der Kiureghian (1986) proposed a certain merit function for the line searches in the
RF-algorithm and could demonstrate both efficiency and reliability of the improved algorithm in a
number of examples. This modification will be denoted the LDK-algorithm. The RF-algorithm and the
LDK-algorithm were originally designed only for single-constraint problems, but generalizations to
multi-constraint problems have recently been proposed independently by Abdo (1989) and
Sørensen/Thoft-Christensen (1989). A first general conclusion from a variety of related and in part
unpublished studies is that it is worthwhile to adjust general-purpose algorithms to the specific
settings in reliability computations. A second conclusion is that relatively small changes in the
algorithms can substantially improve their behavior. The third conclusion from these studies is that the
gradient-based algorithms in their present form are only slightly less efficient than the SQP-algorithms
in not too large dimensions and for well-behaved problems. The RF-algorithm and, to a lesser
degree, the LDK-algorithm are clearly inferior to the latter for highly curved constraint functions under
otherwise similar conditions, due to the better convergence behavior (fewer iterations) of the
SQP-algorithms. This is especially true when very expensive structural state functions are involved,
such as finite element structural analyses. All other types of search algorithms appear to be serious
competitors to the RF-, LDK- or SQP-algorithms only in very special cases. However, for larger
dimensions of the uncertainty vector, larger than 50, say, all implementations of the SQP-algorithm
available to us became less efficient and less reliable than gradient-based algorithms. This is easily
explained by the fact that those algorithms use numerical updates of certain Hessian matrices, which in
part are just the reason for their efficiency. For larger dimensions the update must remain crude when
the number of iterations is significantly smaller than the dimension of the Hessian. Hence, the
theoretical superiority of the SQP-algorithm over gradient-based algorithms turns out to be
insignificant in practical applications. More important, SQP-algorithms failed to converge reliably in
higher dimensions for reasons to be explained later in detail. Also, storage and CPU time requirements
became rather high. It may therefore be asked whether it is possible to overcome the shortcomings of
the SQP-algorithms in higher dimensions while retaining their otherwise favorable properties.
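For orientation, the basic gradient search referred to above can be sketched in a few lines. This is a minimal Hasofer-Lind/Rackwitz-Fiessler iteration without the step length truncation and line searches discussed in the text; the state function, starting point and tolerances are illustrative assumptions, not the authors' implementation:

```python
import math

def grad(g, u, h=1e-6):               # forward-difference gradient of the state function
    g0 = g(u)
    return [(g(u[:i] + [u[i] + h] + u[i+1:]) - g0) / h for i in range(len(u))]

def hlrf(g, n, tol=1e-8, itmax=100):
    """Basic Hasofer-Lind / Rackwitz-Fiessler iteration (no step-length control)."""
    u = [0.1] * n
    for _ in range(itmax):
        gu, dg = g(u), grad(g, u)
        norm2 = sum(d * d for d in dg)
        # Project onto the linearized constraint: the standard HL-RF update
        u_new = [(sum(d * x for d, x in zip(dg, u)) - gu) / norm2 * d for d in dg]
        if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
            return u_new
        u = u_new
    return u

# Linear test case: g(u) = 3 - (u1 + u2)/sqrt(2); the beta-point lies at distance 3
g = lambda u: 3.0 - (u[0] + u[1]) / math.sqrt(2.0)
u_star = hlrf(g, 2)
beta = math.sqrt(sum(x * x for x in u_star))
print(round(beta, 4))   # -> 3.0
```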
In this paper the SQP-algorithm will briefly be reviewed with special reference to possible problems.
The SQP-algorithm is then specialized for reliability calculations in the standard space. Certain
simplifications and modifications are introduced resulting in a new multi-constraint algorithm. It can
also be used in time-variant problems with little modification. Results of numerical tests are reported.
The general optimization problem considered is

\min f(u) \quad \text{subject to} \quad g_j(u) \le 0, \quad j = 1, \ldots, m \qquad (2)

The associated Lagrangian function is

L(u, \lambda) = f(u) + \sum_{j=1}^{m} \lambda_j g_j(u) \qquad (3)
where \lambda_j are the so-called Lagrange multipliers. The Kuhn-Tucker optimality conditions for an
optimal point u^* are

\nabla f(u^*) + \sum_{j=1}^{t} \lambda_j \nabla g_j(u^*) = 0, \qquad g_j(u^*) = 0, \quad j = 1, \ldots, t \qquad (4)

where t is the number of active inequality constraints and \nabla is the gradient operator. The inclusion of equality
constraints is easily done by considering them always as active constraints. If n is the number of
variables and the active constraints are known, an optimal point u^* can be found by solving the system
of nonlinear equations (4) for n + t unknowns, namely for the n components of the solution point u^* and the t
Lagrange multipliers \lambda^*.
An algorithm based on the solution of this system of equations by a direct method such as the Newton
method is extremely inefficient. It also must be noticed that the set of active constraints at the
solution point is not known in advance. The Sequential Quadratic Programming method is generally
considered the most efficient for a solution. This method replaces the original problem by a sequence of
quadratic programming problems which are exactly solvable and which approximate the original one.
This is done by approximating the Lagrangian function by its second-order Taylor expansion in an
initial point u_0:
L(u) \approx L_0 + \nabla L_0^T \Delta u + \tfrac{1}{2} \Delta u^T \nabla^2 L_0 \Delta u

where

L_0 = f_0 + \sum_{j=1}^{m} \lambda_j g_j(u_0)

\nabla L_0 = \nabla f_0 + \sum_{j=1}^{m} \lambda_j \nabla g_{j0}

\nabla^2 L_0 = \nabla^2 f_0 + \sum_{j=1}^{m} \lambda_j \nabla^2 g_{j0} \qquad (9)
and \nabla^2 f_0 represents the Hessian of the function f at the point u_0. For the formulation of the optimality
conditions the constraint functions are approximated by their first-order Taylor expansion. The
Kuhn-Tucker conditions are then written for the set of active constraints in the solution point of the
quadratic programming problem. With a so-called active set strategy (see Gill et al. (1981)) it is
possible to determine which constraints are to be included in the formulation in each quadratic
problem of the sequence. The optimality conditions for any iteration point k of the sequence of
quadratic expansions are

\nabla^2 L_k \Delta u_k + \nabla f_k + \sum_{j=1}^{t} \lambda_j \nabla g_j^k = 0 \qquad (10)

g_j(u_k) + \nabla g_j^{kT} \Delta u_k = 0, \quad j = 1, 2, \ldots, t \qquad (11)

In matrix form this system reads

\begin{bmatrix} \nabla^2 L_k & G_k \\ G_k^T & 0 \end{bmatrix} \begin{bmatrix} \Delta u_k \\ \lambda_k \end{bmatrix} = \begin{bmatrix} -\nabla f_k \\ -f_k \end{bmatrix} \qquad (12)
with
\Delta u_k = u_{k+1} - u_k \qquad (13)

G_k = [\nabla g_1^k, \ldots, \nabla g_j^k, \ldots, \nabla g_t^k]_{n \times t} \qquad (14)

f_k^T = [g_1(u_k), \ldots, g_j(u_k), \ldots, g_t(u_k)]_{1 \times t} \qquad (15)
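One iteration of this scheme amounts to assembling and solving a linear system of the form (12). The following sketch uses illustrative data; a unit matrix stands in for the Hessian approximation, as the text notes for the first iteration, and the single linearized constraint is an assumption for the illustration:

```python
import numpy as np

# Objective f(u) = ||u||^2, one active linearized constraint g(u) = b - a.u
n = 3
u_k = np.array([1.0, 1.0, 1.0])
a = np.array([0.6, 0.8, 0.0])
b = 2.0

grad_f = 2.0 * u_k                       # gradient of ||u||^2 at u_k
G = (-a).reshape(n, 1)                   # constraint gradient matrix G_k (n x t, t = 1)
f_k = np.array([b - a @ u_k])            # constraint value at u_k

H = np.eye(n)                            # unit matrix replaces the true Hessian
K = np.block([[H, G], [G.T, np.zeros((1, 1))]])
rhs = np.concatenate([-grad_f, -f_k])
sol = np.linalg.solve(K, rhs)
du, lam = sol[:n], sol[n:]
# du is the search direction; the linearized constraint holds exactly at u_k + du:
print(abs(b - a @ (u_k + du)) < 1e-12)
```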
The exact calculation of the second-order derivatives for the Hessian matrix in Eq. (12) is generally too
expensive and cannot be efficiently implemented for a general case. Therefore, the gradient
information obtained in each point during the iteration is used to build up an approximation of this matrix
using one of the known update formulas (see Gill et al. (1981)). In the first iteration a unit matrix is
used instead of the true Hessian to solve the system of equations Eq. (12). The solution of this
quadratic problem with linear constraints defines a direction \Delta u_k in which a line search is performed.
This one-dimensional search is performed to obtain an optimal decrease of the objective and the
constraint functions in that direction. The new iteration point is defined by
u_{k+1} = u_k + \nu_k \Delta u_k \qquad (16)

where \nu_k is the step length. The descent function used in the line search is the augmented Lagrangian

\psi(u_k + \nu_k \Delta u_k) = f(u_k + \nu_k \Delta u_k) + \sum_{j=1}^{m} \left[ \lambda_j g_j(u_k + \nu_k \Delta u_k) + \tfrac{1}{2} r_j \, g_j^2(u_k + \nu_k \Delta u_k) \right] \qquad (17)
with
(18)
and r^0 = |\lambda^0|. This augmented Lagrangian function was proposed by Schittkowski (1981), who proved
global convergence of the algorithm with this definition of the descent function. The process stops
when the optimality conditions of the original problem are satisfied. In general, \nu_k needs to be
determined only approximately, e.g. by quadratic or cubic interpolation.
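The approximate determination of \nu_k by quadratic interpolation can be sketched as follows; the merit function and the safeguards on the step are illustrative assumptions, not the reference implementation:

```python
def quad_step(psi, nu0=1.0):
    """Approximate minimizer of a merit function psi along a search direction by
    one quadratic interpolation through psi(0), psi'(0) (finite diff.) and psi(nu0)."""
    h = 1e-6
    p0, d0 = psi(0.0), (psi(h) - psi(0.0)) / h    # value and slope at nu = 0
    p1 = psi(nu0)
    denom = p1 - p0 - d0 * nu0                    # curvature term of the fitted quadratic
    if denom <= 0.0:                              # no useful curvature: accept full step
        return nu0
    nu = -d0 * nu0 * nu0 / (2.0 * denom)          # vertex of the interpolating quadratic
    return min(max(nu, 0.1 * nu0), nu0)           # safeguard the step length

# Illustrative merit function psi(nu) = (nu - 0.3)^2 with its minimum at nu = 0.3
print(round(quad_step(lambda nu: (nu - 0.3) ** 2), 3))   # -> 0.3
```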
The most time-consuming parts of this algorithm are the updating of the Hessian matrix or its triangular
decomposition and the solution of the system of equations. In each iteration 10n^2 + O(n) arithmetic
operations are required. The update formulae determine the exact Hessian of quadratic functions after
n updates. A fair approximation of the Hessian of non-quadratic functions is also obtained with
about n updates of the matrix. This means that the approximation used in the few (say ten) iterations
to reach convergence cannot be very good when the problem has a large number of variables. Moreover,
the rounding errors during the updating process in large problems can cause the approximate Hessian to
become singular. Close to singularity the search direction can be significantly distorted. In this case the
algorithm has to restart the iteration with a unit Hessian matrix in the point where singularity
occurred.
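The crudeness of the update after few iterations can be illustrated numerically. In the sketch below (illustrative quadratic objective and random step directions, not from the paper) a BFGS-type update is applied k = 10 times in n = 100 dimensions; the resulting approximation remains far from the true Hessian:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 100, 10
A = np.diag(np.linspace(1.0, 5.0, n))   # exact Hessian of a quadratic f(u) = 0.5 u^T A u

B = np.eye(n)                           # initial Hessian approximation
for _ in range(k):                      # k << n BFGS updates from random step directions
    s = rng.standard_normal(n)
    y = A @ s                           # exact curvature information along s
    Bs = B @ s
    B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

# After only k updates the approximation is still far from A in most directions:
print(np.linalg.norm(B - A) > 0.5 * np.linalg.norm(np.eye(n) - A))
```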
It is possible to modify the algorithm for reliability applications. As mentioned, the objective function is
a simple quadratic function which can be exactly expressed by its Taylor expansion in any point u_k:

f(u) = \|u\|^2 = f(u_k) + \nabla f_k^T \Delta u + \tfrac{1}{2} \Delta u^T \nabla^2 f_k \Delta u = \|u_k\|^2 + 2 u_k^T \Delta u + \Delta u^T \Delta u \qquad (19)
This expansion and a first-order Taylor approximation of the constraints are used to write the
Lagrangian function as
L(u, \lambda) = \|u_k\|^2 + 2 u_k^T \Delta u + \Delta u^T \Delta u + \sum_{j=1}^{m} \lambda_j \left\{ g_j(u_k) + \nabla g_j^{kT} \Delta u \right\} \qquad (20)
For the optimality conditions the constraints are also approximated by a linear expansion, and one
obtains the following Kuhn-Tucker conditions:

2 (u_k + \Delta u) + \sum_{j=1}^{t} \lambda_j \nabla g_j^k = 0 \qquad (21)

g_j(u_k) + \nabla g_j^{kT} \Delta u = 0, \quad j = 1, \ldots, t

where t is the number of active constraints. This system of equations can be written in matrix form
using the definitions in Eqs. (14) and (15):

\begin{bmatrix} 2I & G_k \\ G_k^T & 0 \end{bmatrix} \begin{bmatrix} \Delta u \\ \lambda \end{bmatrix} = \begin{bmatrix} -2 u_k \\ -f_k \end{bmatrix} \qquad (22)
With this formulation a constant approximation of the true Hessian matrix is obtained: only the
contribution of the objective function to the Hessian is considered. The solution of this system is
obtained by a Gaussian decomposition of the matrix in Eq. (22).
This closed-form solution of the system of equations needs only the numerical decomposition of a
small matrix G_k^T G_k of dimension t, which contains the scalar products of the gradients of the
constraints as elements, in each iteration. The result can be written in a simpler way by noting that the
gradient matrix can be given as
G_k = A_k D_k, \qquad D_k = \mathrm{diag}(\|\nabla g_1^k\|, \ldots, \|\nabla g_t^k\|) \qquad (25)

with

A_k = [a_1^k, \ldots, a_j^k, \ldots, a_t^k] \qquad (26)

a_j^k = \frac{\nabla g_j^k}{\|\nabla g_j^k\|} \qquad (27)
With this notation the solution for the new point is written as
(28)
where
\rho_k = A_k^T A_k \qquad (29)
can be interpreted as the correlation matrix of the constraints in the point k. This matrix is always
positive definite. The use of the diagonal unit matrix as the Hessian matrix of the Lagrangian is
justified because of the special form of the objective function. The mathematical proofs of global
convergence of the algorithm are also valid for a unit matrix when the augmented Lagrangian function
is used as a descent function (Hock/Schittkowski (1983)). The new iteration point is then found by
calculating the optimal step length \nu_k with the descent function in Eq. (17):
u_{k+1} = u_k + \nu_k \Delta u_k \qquad (30)
This is the RF-algorithm in its original form, but here it is combined with the step length criterion by
Hock/Schittkowski (1983). This new algorithm can be shown to be superior to the RF-algorithm in
any dimension and superior to SQP-algorithms for larger dimensions. Due to the specific choice of the
descent function it is also slightly more reliable than the LDK-algorithm but otherwise similar. For
convenience, it will be called the ARF-algorithm in the following. The algorithm further allows an
important generalization for parameter-dependent reliability problems. Parameters occur, for example,
in reliability problems involving non-homogeneous random processes, fields or state functions. Here we
consider only the single-constraint case. The extension to the multi-constraint case is straightforward.
Assume that the state function is an explicit or implicit function of a deterministic variable vector \tau
which, for example, represents time and/or space coordinates. The optimization problem is formally
rewritten as
g(\bar{u}) \le 0 \qquad (33)

with

\bar{u} = \begin{bmatrix} u \\ \tau \end{bmatrix} \qquad (34)
The Kuhn-Tucker conditions for the Lagrangian

L(\bar{u}, \lambda) = \|u_k\|^2 + 2 u_k^T \Delta u + \Delta u^T \Delta u + \lambda \left\{ g(\bar{u}_k) + \nabla g_k^T \Delta \bar{u} \right\} + \tfrac{1}{2} \Delta \tau^T \Delta \tau \qquad (35)

are written using a first-order expansion of the constraint and by approximating the contribution of the
second derivatives of the constraint with respect to the vector \tau (including the Lagrange multiplier) by
a unit matrix of the same dimension (n_\tau) as the vector \tau. The system of equations in matrix form then
is

\begin{bmatrix} 2I & 0 & \nabla_u g_k \\ 0 & I & \nabla_\tau g_k \\ \nabla_u g_k^T & \nabla_\tau g_k^T & 0 \end{bmatrix} \begin{bmatrix} \Delta u \\ \Delta \tau \\ \lambda \end{bmatrix} = \begin{bmatrix} -2 u_k \\ 0 \\ -g(\bar{u}_k) \end{bmatrix} \qquad (37)
In a large number of examples of varying problem dimension and partly large curvatures the earlier
findings have been confirmed, i.e. (see Abdo (1989)):
- The RF-algorithm performs well in any dimension when the curvatures of the constraints are
small. The number of iterations then is only slightly larger than for the other, theoretically
superior algorithms. The storage and CPU time requirements are small. However, when the
curvatures become larger the algorithm can require a large number of iterations.
- In smaller dimensions but with more curved constraint functions, SQP-algorithms (here we used
the algorithm NLPQL by Schittkowski (1984) for numerical comparisons) required the smallest
number of iterations but more and more storage and CPU time with increasing problem
dimension. For highly curved constraint functions SQP-algorithms are also the most reliable in
small and moderate dimensions. The mentioned algorithm, however, failed to reach convergence
at problem dimensions of around n = 50 and larger due to singularity of the updated Hessians,
and continued to do so even with carefully selected starting vectors. Limited experience with
other SQP-implementations suggests that this is a general feature of SQP-algorithms. The
authors believe that, in principle, it should be possible to design a device which avoids this
behavior; it would most likely be associated with some loss of efficiency.
- The new ARF-algorithm was found to work reliably in any dimension and almost as efficiently as
the pure SQP-algorithms, from which it uses essential elements. It appears that a suitable step
length algorithm is most important for efficiency and reliability. Especially in problems with
larger curvatures the new algorithm can be orders of magnitude more efficient than the
original RF-algorithm. The storage requirements and CPU time are much smaller than for the
SQP-algorithms and by and large the same as for the RF-algorithm. Numerical comparisons
of the RF-algorithm with the LDK-algorithm showed that reliability and efficiency of the two
algorithms are comparable only if the parameter r_k is suitably chosen. Hence it is essentially
Eq. (18) which makes the new algorithm more stable and efficient under all circumstances. For
the special state function
g(u) = 1.5 \sum_{i=1}^{n} \exp[0.10\, u_i] - n \exp[0.10\, u_1] \qquad (41)
Figure 1 shows the required storage in appropriate units versus the number of variables in
Eq. (41). The dramatic increase in storage of the NLPQL-algorithm with problem dimension
limits its use. For example, under the DOS operating system up to 85 variables can be handled,
while the ARF-algorithm can handle problems with up to 2000 variables. In Figure 2 the time per
iteration is plotted versus the number of variables in Eq. (41) for the two algorithms, indicating
that the CPU time is comparable only for very small dimensions but increases rapidly when the
number of variables is larger than about 20. In these comparisons one has to keep in mind that
the state function used in the example is very simple. An SQP-algorithm may still be preferable in
lower dimensions when calls of the state function and its derivatives require much computing
time. Finally, it is worth mentioning that the program code of the new algorithm is less than
half the length of most SQP-algorithms.
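A toy version of such an RF-type step with merit-controlled step length can be sketched as follows. The exact-penalty merit function, its constant c, and the state function are illustrative assumptions and stand in for the descent function of Eq. (17); this is not the published algorithm:

```python
import math

def rf_direction(g, grad_g, u):
    """Direction from u to the closed-form solution of the linearized subproblem."""
    gu, dg = g(u), grad_g(u)
    n2 = sum(d * d for d in dg)
    target = [(sum(d * x for d, x in zip(dg, u)) - gu) / n2 * d for d in dg]
    return [t - x for t, x in zip(target, u)]

def arf_step(g, grad_g, u, c=10.0):
    """One RF step with step halving on a simple exact-penalty merit function
    (an illustrative stand-in for the augmented Lagrangian descent function)."""
    d = rf_direction(g, grad_g, u)
    merit = lambda v: (sum((x + v * dx) ** 2 for x, dx in zip(u, d))
                       + c * abs(g([x + v * dx for x, dx in zip(u, d)])))
    nu = 1.0
    while merit(nu) > merit(0.0) and nu > 1e-6:   # halve until the merit decreases
        nu *= 0.5
    return [x + nu * dx for x, dx in zip(u, d)]

# Illustrative state function with beta-point u* = (3, 0), i.e. beta = 3
g = lambda u: 3.0 - u[0] - 0.1 * u[1] ** 2
grad_g = lambda u: [-1.0, -0.2 * u[1]]
u = [1.0, 1.0]
for _ in range(60):
    u = arf_step(g, grad_g, u)
print(round(math.hypot(*u), 4))
```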
These findings, of course, refer only to the pre-defined application to reliability calculations in the
standard space. Furthermore, it is important to keep in mind that in actual calculations the proper
scaling of function values and the particular schemes for taking the derivatives of the constraints are of
vital importance. Also, the numerical constants for the convergence criteria and the other
numerical operations must be consistent and adjustable to the specific problem at hand. A
theoretically optimal algorithm can behave much worse than a simple gradient algorithm if
implemented inadequately in this respect.
From some preliminary studies one can conclude that efficient and reliable original-space algorithms
should be designed along similar lines, because pure SQP-algorithms show the same drawbacks in higher
dimensions as in the standard space. In this case the objective function simply is the joint density of X
in Eq. (1), or better ln f_X(x). Again, the numerical update of the Hessian of the Lagrangian must be
avoided. With some additional computational effort one can compute analytically the Hessian of the
objective function for most stochastic models, and it appears that with only little loss of efficiency one
can concentrate on the diagonal. This then leads to a scheme very similar to the ARF-algorithm.
However, as mentioned before, some more work is needed to reach final conclusions.
Summary
The search algorithms for the critical point of standard normal probability integrals based only on
gradients and simple step length strategies are theoretically and practically inferior to sequential
quadratic programming algorithms in general applications. The latter, however, show instabilities in
higher problem dimensions, where they also can consume much CPU time and storage. By certain
modifications it is possible to construct a new algorithm which has almost the same convergence
properties as SQP-algorithms in smaller dimensions but does not share their shortcomings in large-dimensional
problems. A new algorithm is proposed which essentially is the RF-algorithm but with an
improved step length procedure. It has been generalized to multiple constraints and to include
optimization parameters not explicitly occurring in the objective function, thus facilitating the
numerical solution of time-variant reliability problems.
References:
Fiessler, B., Neumann, H.-J., and Rackwitz, R. (1979). Quadratic Limit States in Structural
Reliability. Journal of Engineering Mechanics, ASCE, Vol. 105, EM4, 661-676.
Gill, P.E., Murray, W., Wright, M.H. (1981). Practical Optimization. Academic Press,
London.
Hasofer, A.M., and Lind, N.C. (1974). An Exact and Invariant First Order Reliability
Format. Journal of Engineering Mechanics, ASCE, Vol. 100, No. EM1, 111-121.
Hohenbichler, M., Gollwitzer, S., Kruse, W., and Rackwitz, R. (1987). New Light on First-
and Second-Order Reliability Methods. Structural Safety, Vol. 4, pp. 267-284.
Liu, P.-L., and Der Kiureghian, A. (1986). Optimization Algorithms for Structural Reliability
Analysis. Rep. UCB/SESM-86/09, Dept. of Civil Engineering, University of California, Berkeley.
Rackwitz, R., and Fiessler, B. (1978). Structural Reliability under Combined Random Load
Sequences. Computers & Structures, Vol. 9, pp. 484-494.
Schittkowski, K. (1981). The Nonlinear Programming Method of Wilson, Han and Powell
with an Augmented Lagrangian Type Line Search Function. Numerische Mathematik, Vol.
38, Springer-Verlag, 83-127.
[Figure 1: Required storage versus problem dimension (number of variables) for the NLPQL-algorithm and the ARF-algorithm.]

[Figure 2: CPU time per iteration versus problem dimension (number of variables) for the NLPQL-algorithm and the ARF-algorithm.]
RIGID-IDEAL PLASTIC SYSTEM RELIABILITY

Torben Arnbjerg-Nielsen and Ove Ditlevsen

Abstract A method for upper bound approximations to the reliability with respect to rigid-ideal
plastic structural collapse is presented. Usually only very few collapse mechanisms contribute signifi-
cantly to the total probability of collapse. The problem is therefore to identify these few mechanisms.
The method is illustrated for a spatial frame structure discretized into a finite number of potential
yield hinges. Each potential yield hinge is modeled individually by assigning to it any general
piecewise differentiable yield surface. The associated flow rule is assumed to be valid in all hinges.
Yield strengths and loads are random without restrictions on the choice of the joint distribution
except for operational transformability into a standard Gaussian space. It is shown how the
problem of search for significant collapse mechanisms can be formulated as a standard constrained
non-linear minimization problem.
is convex and nonempty with probability one. Q_i is the vector of internal forces
and Y_i the vector of strength parameters of the ith yield condition with yield
function f_i. For each yield function the associated flow rule is assumed to be valid.
1 - P_f = P\left( \bigcup_{z \in \mathbb{R}^n} [Q(R, z) \in S] \right)

where P_f is the failure probability, n = \dim(z), and Q(R, z) \in S is the event that
f_i(Q_i, Y_i) \le 0 for all i = 1, \ldots, r. In general the evaluation of this probability is
not practicable. Instead the lower bound

1 - P_f \ge P([Q(R, z) \in S])

has been considered for some suitable fixed value of z. However, in general this
lower bound is not very close, [4], [7], and in the following example no effort is
made to evaluate it.
to the safety margin f_i(Q_i, Y_i). D(Y_i, a_i) is the dissipation, and \langle \cdot, \cdot \rangle is the
scalar product. It is noticed that for any internal force q_i outside the yield surface
f_i(q_i, y_i) = 0 there is an a_i such that M_i(q_i, y_i, a_i) < 0. From the upper bound
theorem of plasticity theory an upper bound on the reliability can be obtained by
use of (1).
For a given piecewise continuous velocity field, represented by the strain rate
vectors a_i, i = 1, \ldots, r, the principle of virtual work can be written

\sum_{i=1}^{r} \langle Q_i, a_i \rangle = \langle R, \delta \rangle \qquad (2)

where Q_i, i = 1, \ldots, r are the internal forces in equilibrium with the external loads
R. The vector \delta is the generalized velocity vector associated with R. The geometrical
compatibility defines \delta so that \langle R, \delta \rangle represents the work rate of R due to the
imposed velocity field. It is seen from (2) that the left side is independent of
the vector of redundants z. This means that any set of admissible strain rate
vectors a_i, i = 1, \ldots, r, which makes \sum_{i=1}^{r} \langle Q_i(R, z), a_i \rangle independent of z is
kinematically admissible (i.e. it fulfils the boundary conditions).
A linear combination of linearly associated lower bound safety margins with
admissible strain rate vectors reads

M = \sum_{i=1}^{r} M_i(Q_i, Y_i, a_i) \qquad (3)

If the strain rate vectors also satisfy the geometrical compatibility, so that
\sum_{i=1}^{r} \langle Q_i(R, z), a_i \rangle is independent of z, the strain rate vectors represent a
kinematically admissible velocity field. It is seen from the upper bound theorem
of plasticity that if (3), based on admissible strain rate vectors, is independent
of z, it is an upper bound safety margin. These results are summarized in the
Search for upper bound safety margins of low reliability The search for an
upper bound on the reliability is based on non-linear optimization. It is assumed
that the random variables, i.e. the external loads and the yield strengths, can be
transformed into standard Gaussian variables, and that ai ∈ R^{mi}, i = 1, ..., r,
where mi = dim(ai), are admissible strain rate vectors. The probability that an
upper bound safety margin is negative is computed by the FORM approximation
which for a non-convex limit state might result in an underestimation of the
reliability. This leads to the following minimization problem:
β = min { |u| : g(u, α) = 0 }

attains a local minimum value, given that X^T = (Y_1^T, ..., Y_r^T, R^T) is transformed
to the standard and independent Gaussian variables U, and that g(U, α) =
Σ_{i=1}^{r} Mi(Qi, Yi, αi) is independent of z, where α^T = (α_1^T, ..., α_r^T). The
sensitivity of β with respect to the strain rate parameters is

∂β/∂α_ij = (∂g(u*, α)/∂α_ij) / |∇g(u*, α)|
where u* is the solution point to min{ |u| : g(u, α) = 0 }, and ∇g(u*, α) is the gradient
of g(u, α) with respect to u evaluated at u*.
In general, there will be a number of solutions to the minimization problem,
solutions representing different failure modes. The different modes are obtained by
starting the minimization in different α1, ..., αr points (or, since there is no scaling
in the strain rate space, in different directions). Due to the correlation among the
mechanisms, more than the global minimum (minima) is usually needed in order
to obtain a close upper bound. However, there is no guarantee that the global
minimum (minima) is found.
As described above, the strain rate vectors in an upper bound safety margin
are expressed in terms of q = (Σ mi) - n free parameters. This means that the
strain rate space has been parameterized in the same way as in the fundamental
mechanisms approach, [13].
Using a different approach, the same type of problem has been studied in [2].
The reliability with respect to plastic collapse is expressed as
1 - Pf = P(g(R, Y) > 0)   (4)

where g is based on the lower bound theorem. After a transformation of R and Y into the
standard and independent Gaussian variables U, a transformation which transforms
(4) into the limit state gu(u) = 0, a FORM approximation is used to identify
one or several likely failure points
[Figure: example frame structure with external loads R1-R6 and dimensions 5 m, 5 m, and 4 m.]
fi(qi, yi) = 1 - Σ_{j=1}^{nbi} [ (N^j / Nf^j)^2 + (My^j / Myf^j)^2 + (Mz^j / Mzf^j)^2 ] = 0   (5)

where the summation j = 1, ..., nbi is over the number of beams in the nodal
point. N^j is the normal force, and My^j and Mz^j are the bending moments of the
j'th beam in the i'th node with reference to the local coordinate system of the
j'th beam. Nf^j, Myf^j, and Mzf^j are the corresponding yield strengths. The local
right-hand oriented coordinate systems, with coordinate ordering x, y, and z,
have the x-axes in the beam directions. For the vertical beams the y-axes are in
the x3-axis direction, whereas for the horizontal beams they are in the x2-axis
direction.
The yield strengths are modelled by 6 variables for each beam, where each beam
reaches from potential yield hinge to potential yield hinge. The yield strength
variables Y are lognormally distributed, all with a coefficient of variation of Vy =
0.15. The means are taken as E(Nf ) = 20 MN and E(Myf) = E(Mzf) = 0.5 MNm
for all beams. The 6 yield strength variables assigned to a beam are equicorrelated
with correlation coefficient 0.9, whereas the yield strengths for different beams are
mutually independent.
The external loads, R, are normally distributed and independent of the yield
strengths. The mean values are
E(R)^T = (0.12, 0.12, 10, 0.25, 0.1, 10) MN

and the coefficients of variation are

V_R = (0.625, 0.625, 0.05, 0.5, 0.5, 0.05)
The coefficients of correlation are zero except for ρ(R1, R2) = ρ(R4, R5) = 1. By
this, the dimension of the u-space is 52.
The dimension of the strain rate space is 48, but in order to assure that the
strain rate vectors are kinematically admissible, the strain rate space is described
by 24 parameters.
By starting the minimization in 40 simulated strain rate points, 10 different
mechanisms are identified with an average solution time of 88 sec. on an Apollo
10000. The identified mechanisms have the geometrical reliability indices
β1 = 2.896   β2 = 2.941   β3 = 2.979   β4 = 4.478   β5 = 4.455
β6 = 4.469   β7 = 4.478   β8 = 4.502   β9 = 4.556   β10 = 4.628
of which only the first three contribute significantly to the collapse probability.
On the basis of the 10 identified mechanisms, the reliability of the system can
be approximately bounded as follows:
β       A     B     C     D
2.896   21    3.7   21    1.0
2.941   14    3.2   26    1.0
2.979   8.7   2.9   27    1.0
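As a rough illustration of how the mode reliability indices translate into system bounds, the sketch below converts each β to a FORM mode probability Φ(-β) and applies the elementary series-system bounds max Pi ≤ Pf ≤ Σ Pi. This is a simplification: the paper's bounds account for the correlation among mechanisms, which these elementary bounds ignore.

```python
from statistics import NormalDist

# Geometric reliability indices of the 10 identified mechanisms (from the text).
betas = [2.896, 2.941, 2.979, 4.478, 4.455,
         4.469, 4.478, 4.502, 4.556, 4.628]

phi = NormalDist()  # standard Gaussian
p_modes = [phi.cdf(-b) for b in betas]  # FORM probability of each mechanism

# Elementary series-system bounds (mode correlation ignored):
p_lower = max(p_modes)
p_upper = min(1.0, sum(p_modes))

# Equivalent bounds on the system reliability index.
beta_upper = -phi.inv_cdf(p_lower)  # equals the smallest beta, 2.896
beta_lower = -phi.inv_cdf(p_upper)
```

Because the first three modes dominate, the upper probability bound is essentially the sum of their three contributions, consistent with the remark in the text.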
References
[1] Bjerager, P., "Reliability Analysis of Structural Systems", Department of Structural Engineering,
Technical University of Denmark, R183, 1984.
[2] Bjerager, P., "Plastic systems reliability by LP and FORM", Computers and Structures, Vol. 31,
No. 2, 1989.
[3] Dennis, J.E., and Schnabel, R.B., "Numerical Methods for Unconstrained Optimization and Non-
linear Equations", Prentice Hall Series in Computational Mathematics, 1983.
[4] Ditlevsen, O., and Bjerager, P., "Reliability of highly redundant plastic structures", Journal of
Engineering Mechanics, ASCE, 110(5), 1984.
[5] Ditlevsen, O., and Bjerager, P., "Plastic Reliability Analysis by Directional Simulation", Journal
of Engineering Mechanics, ASCE, 115(6), 1989.
[6] Ditlevsen, O., "Probabilistic statics of discretized ideal plastic frames", Journal of Engineering
Mechanics, ASCE, 114(12), 1988.
[7] Ditlevsen, O., and Arnbjerg-Nielsen, T., "Reliability analysis of stochastic rigid-ideal plastic wall
by finite elements", IFIP, London, 1988.
[8] Liu, P.-L., and Der Kiureghian, A., "Optimization algorithms for Structural Reliability", in Pro-
ceedings, Joint ASME/SES Applied Mechanics and Engineering Sciences Conference, pp. 185-
196, Berkeley, CA, June 1988.
[9] Madsen, H.O., Krenk, S., and Lind, N.C., "Methods of Structural Safety", Prentice-Hall, 1986.
[10] Madsen, K., and Tingleff, O., "Robust subroutines for non-linear optimization", Report NI 86-01,
Institute for Numerical Analysis, Technical University of Denmark, 1986.
[11] Schittkowski, K., "NLPQL: A Fortran Subroutine Solving Constrained Non-linear Programming
Problems", Annals of Operations Research, 1986.
[12] Sigurdsson, G., "Some aspects of reliability of offshore structures", Paper No. 55, Institute of
Building Technology and Structural Engineering, University of Aalborg, Denmark, 1989.
[13] Thoft-Christensen, P., and Murotsu, Y., "Application of Structural Systems Reliability Theory",
Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 1986.
OPTIMAL ALLOCATION OF AVAILABLE RESOURCES
TO IMPROVE THE RELIABILITY OF BUILDING SYSTEMS
SUMMARY
This paper deals with structural optimization under limited resources. As a first relevant
example, the problem of reduction of seismic risk in a built environment is tackled. Following
other recent research [1-3], three alternative "types" of retrofitting (upgrading intervention) of
known cost per unit volume are considered; it is also assumed that the "vulnerability" of each
building, the "losses" under an earthquake of given magnitude and their possible reduction
thanks to an upgrading intervention are known. Such "losses" can be either (i) the direct and/or
indirect "economical" costs of construction damage, or (ii) the non-monetary losses related to
the number of "victims", i.e. of persons physically affected by the ruin of a building. Use is made
of "dynamic programming" in order to allocate the total available resources among the buildings,
in such a way that the expected cost reduction is maximized; in these calculations, "economical"
and non-monetary losses are considered separately; the possibility of taking both into account
is also discussed. Numerical examples are presented.
1. INTRODUCTION
It is well known that, when account is taken of random uncertainties and of the consequent
probabilities of failure, the total expected cost of a construction is minimum (and
correspondingly the expected "utility" is maximum) for a certain set of values of the "design
variables". It is a typical feature, confirmed by all numerical investigations so far performed [4],
that from the optimal point the total cost increases (and conversely, the expected utility
decreases) very rapidly for under-designed structures, and rather slowly for over-designed
structures: see Fig.1 [4-5] for a simple example with one design variable (the cross-section of
the columns of a one-storey steel frame). Taking account also of the "intangibles" (i.e. non-monetary
costs) that cannot be included in the "economical" optimum, it appears therefore
obvious that a correctly "optimized" design will tend to increase the "initial cost", perhaps even
beyond the "apparent optimal" point.
On the other hand, there may be instances in which the available resources are limited,
and therefore the total initial cost of the constructions has rigid bounds that may keep the
designs below the optimal points. This may happen e.g. when for social reasons it is necessary
to build a certain volume of housing, but a given amount of money has been already allotted to
the building agency, or when a private builder wants to optimize the returns of a fixed quantity of
money.
Fig.1: Simple example of structural design for maximum expected utility: (a) example frame;
(b) plots of expected utility vs column section, for several values of the relevant parameters [4-5].
An even more relevant and realistic example of such a situation is the problem of risk
reduction for a set of existing buildings, that have an unacceptable probability of failure because
of their age and deterioration or of their insufficient design: typically, this is the problem posed
by "old" housing in areas that have been recognized as seismically active, because - as well
known to the technicians and planners that tackle these problems - the money necessary for an
exhaustive upgrading is seldom available, but still something must be done in order to improve
the conditions of the buildings.
Design optimization under limited resources therefore appears to be a problem of practical, as
well as academic, interest; however, to these Authors' knowledge, it had not been
treated in structural engineering before their recent researches [1-3], in which a
procedure for such optimization problems has been developed and applied to the specific case
of the reduction of seismic risk of existing buildings. This procedure will be again illustrated and
further developed in the present paper, but these Authors feel that it could be easily extended to
analogous situations.
Consider a set of existing buildings and assume that their "vulnerability" for a given type of
load is known, i.e. that the (probabilistic) relationship is known between "expected damage" and
"load intensity": in the applications so far developed, only seismic load is considered and the
vulnerability is measured by a number ("vulnerability index"), in turn obtained in surveys in which
the so-called "second level" form has been used, as described in several references (e.g. [6]). A
relationship between vulnerability index, earthquake intensity and percent "damage" of the
building must be introduced for subsequent developments: in the present researches, the set of
curves in Fig.2 has been assumed as deterministic relationships [7-8]. (It is fair to say at once
that this is one of the weakest points in the whole procedure, because of the great dispersion
of the data on which the curves of Fig.2 were based, and the consequent statistical
uncertainties, which should somehow be taken into account in order to obtain more significant
numerical results. However, it is also true that this approximate assumption, like the other
simplified relationships introduced as first tentatives where necessary, does not invalidate the
essence of the procedure illustrated here, because different, more sophisticated relationships
could easily be introduced whenever available.)

Fig.2: Expected percent damage versus vulnerability index, for several MSK intensities (from
D. Benedetti, G.M. Benzoru, Int. Conf. on "Reconstruction, Restoration and Urban Planning of
Towns and Regions ...", Skopje, 1985).
Once the (expected) damage of a building has been calculated, the "losses" must be
evaluated: such "losses" should include both the direct and/or indirect "economical" costs of
construction damage, and the relevant "intangible" (non-monetary) losses, in particular those
related to the number of "victims", i.e. of persons physically endangered by the ruin of a
building. Denoting by d the percent damage, the approximate relations shown in Fig.3 have
been used so far [7]:
Fig.3: a) Monetary losses c vs. damage d (Ci: initial cost); b) victims v (number of endangered
persons) vs. damage d (np: number of persons present in the building at the time of the earthquake).
Note that, in the numerical examples, the number np of persons present has been
assumed equal to 0.017 per cubic meter: the possibility of distinguishing between buildings of
different uses has been left to further research. Making use of the relationships in Figs. 2 and
3, the damages and the losses produced in the relevant buildings by the occurrence of an
earthquake of given local intensity can be calculated. This "expected earthquake" should be
known in probabilistic terms: in the examples that follow, either the intensity of the quake or its
probability mass function (p.m.f.) [4] is assumed known. Note that, as in all our previous
researches [1-3, 7-9], "monetary" and "intangible" costs, being incommensurable, are kept
separate from each other; note also that other "intangible losses", like those related to the
historical and artistic values of the buildings, are not (yet) included in the treatment.
In order to formulate a rational "intervention strategy", it is also necessary to know how
much each possible upgrading intervention costs, and by how much it reduces the vulnerability,
and consequently damages and losses. Therefore, the possible interventions and their effects
must be modelled according to some appropriate simple "scale". In this paper, in analogy to
previous ones, the following three types, respectively denoted as "light" (L), "medium" (M) and
"heavy· (H), are considered:
i) the type L intervention consists in connecting horizontally (by "chains" or other means)
the vertical structures;
ii) in the type M intervention, besides connecting the vertical structures as in L, the
horizontal diaphragms are strengthened;
iii) the type H intervention includes, besides the reinforcements already described under L
and M, an increment of the overall strength against horizontal actions, e.g. by the construction
of new vertical structures or the strengthening of the existing ones.
Recent Italian experiences allow fairly constant values to be attributed to the cost per unit
volume of the above types of intervention performed on "old" masonry buildings (like the ones
examined in our examples), namely the following values, which are also used in this paper: type L:
20,000 Liras (approx. 13.5 ECUs) per cubic meter; type M: 40,000 Liras (approx. 27 ECUs) per
cubic meter; type H: 80,000 Liras (approx. 54 ECUs) per cubic meter.
By assumption, each type of intervention brings into the best (A) class the corresponding
item(s) of the vulnerability survey form and consequently reduces the vulnerability index of the
building.
In previous studies by these and other Authors, the objective of the interventions had been
set first, then the costs necessary to reach that objective evaluated: the result was then
regarded as an "indispensable" amount of money. For instance, in [7] it was stated that no
building in the relevant area should be damaged more than 40%, and the necessary
interventions were recognized for several intensities of "design earthquake"; then, introducing
the quoted unit costs of each type of intervention, the expenditures were calculated.
Thus, for either "monetary" or "intangible" losses, the two following basic problems can be
set up:
i) optimal allocation of the available resources among n buildings (or n groups of similar
buildings) located in a site in which a uniform "design earthquake intensity" is prescribed. This
problem can be formally presented as follows:

maximize the total return fk(x1, x2, ..., xn) = Σi gik(xi)   (1)
subject to Σi xi ≤ Xmax   (2)
and xi ≥ 0   (i = 1, 2, ..., n)   (3)

where Xmax denotes the maximum total expenditure allowed, and the suffix k can be either c
or v.
This case, in which the design earthquake is conventionally assumed to occur with
probability one, allows the resources to be "distributed" optimally, but does not allow the
"expected utility" of the upgrading strategy to be evaluated.
ii) optimal allocation of the available resources among n buildings (or n groups of similar
buildings) located in sites in which the probability mass functions of the significant earthquake
intensities are known: such p.m.f.'s could be the same for all buildings, but in the most general
case a probability of occurrence pij will be defined for each intensity j and building i. Thus, the
following optimization problem can be formulated:

maximize the expected total return fek(x1, x2, ..., xn) = Σi Σj pij gijk(xi)   (4)
subject to Σi xi ≤ Xmax   (5)
and xi ≥ 0   (i = 1, 2, ..., n)   (6)

where again Xmax denotes the maximum total expenditure allowed, and the suffix k can be
either c or v.
The non-linearity and discontinuity of the relevant relationships do not allow the use of
"standard" (differential) maximization procedures. It is therefore convenient to discretize the
whole problem, including the resources that can be allocated. However, even with a
comparatively small number of buildings and of available resource units, the number of possible
alternative allocations is very large: for example, in the following numerical example (120
resource units and 30 buildings) it is of the order of 10^30.
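That order of magnitude is easy to check under a simple model of the allocation space (our assumption, not the paper's exact enumeration): if each of the 30 buildings may receive any nonnegative number of the 120 units, the allocations are the integer solutions of x1 + ... + x30 ≤ 120, counted by stars and bars, which lands within an order of magnitude of the figure quoted in the text.

```python
import math

# Nonnegative integer solutions of x1 + ... + x30 <= 120: add a slack
# variable and count weak compositions of 120 into 31 parts, C(120+30, 30).
n_alloc = math.comb(120 + 30, 30)
# Roughly 10^30 - 10^31 alternatives: far too many to enumerate directly.
```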
It is therefore necessary to use an appropriate algorithm. Noting that the objective function
fk or fek is the sum of as many terms as there are buildings, each in turn a function of only the
resources assigned to the i-th building, the optimization becomes a "multi-stage decisional
process": such a problem can be solved by a specific optimization technique involving a
comparatively small number of operations, i.e. "dynamic programming", already well known and
widely used in other branches of research [10].
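A minimal sketch of the dynamic-programming recursion for this multi-stage allocation follows; the return tables in the toy instance are made-up placeholders, not the paper's Table I data.

```python
def allocate(returns, budget):
    """Maximize sum_i g_i(x_i) subject to sum_i x_i <= budget by dynamic
    programming. returns[i][x] is the return when building i receives x
    resource units (x = 0 .. len(returns[i]) - 1)."""
    best = [0.0] * (budget + 1)  # best[x]: optimum over the buildings seen so far
    choices = []                 # choices[i][x]: units given to building i
    for g in returns:
        new_best = [0.0] * (budget + 1)
        pick = [0] * (budget + 1)
        for x in range(budget + 1):
            for xi in range(min(x, len(g) - 1) + 1):
                val = best[x - xi] + g[xi]
                if val > new_best[x]:
                    new_best[x], pick[x] = val, xi
        best = new_best
        choices.append(pick)
    # Backtrack to recover the optimal allocation.
    alloc, x = [], budget
    for pick in reversed(choices):
        alloc.append(pick[x])
        x -= pick[x]
    return best[budget], alloc[::-1]

# Toy instance: 3 buildings, 5 resource units, placeholder return tables.
returns = [[0, 3, 4, 4], [0, 2, 5, 6], [0, 4, 5, 5]]
total, alloc = allocate(returns, 5)
```

The work is O(n × budget²) rather than exponential in n, which is exactly why the method copes with 30 buildings and 120 units where enumeration cannot.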
4. NUMERICAL EXAMPLE
The allocation of 120 units of resources (of 1 × 10^7 Liras, i.e. approximately 7000 ECUs,
each) among 30 buildings is presented now. The relevant data on the buildings and the returns
of the three types of interventions for an earthquake of MSK intensity 8.5-9 are shown in Table I.
Similar Tables can be constructed for other intensities.
Through the dynamic programming algorithm, which calculates only a limited number of
alternative solutions, the optimum points can be rapidly reached for both the "economical" and
the "intangible" returns (fc and fv) and for each relevant intensity. The results obtained for fc are
diagrammatically shown in Fig.4: for each building, the number of resource units for each type
of intervention is shown on the vertical axis, while the "optimal" solutions are indicated by the
connecting lines.
[Plot: resource units allocated per building, with separate curves for intensities
I = 7-7.5, 7.5-8, 8-8.5 and 8.5-9.]
Fig.4: Allocations of 120 resource units that maximize the decrease in the total "economical"
losses over 30 buildings.
As a second example, the expected returns for the same set of buildings have been
optimized. To this aim, neglecting structural damage caused by MSK intensities smaller than 7,
the larger intensities have been grouped into the following four classes (cf. eq. (4)):
j:              1      2      3      4
MSK intensity:  7-7.5  7.5-8  8-8.5  ≥8.5
The 30 buildings have been assumed to be located in three areas of the Central Italian
Region of Umbria, known to have different seismicities that can be defined by the following
yearly probabilities of occurrence:
Buildings   Site (Town)   p1i   p2i   p3i   p4i
Fig.5: Allocations of 120 resource units that maximize the decrease in either the expected total
"economical" loss or the expected total "intangible" loss over 30 buildings.
In some instances, sub-optimal solutions must also be considered. For instance, one
might decide to assign the maximum useful amount of resources to some buildings of particular
importance (e.g. fire stations), and optimize the allocation of the remaining available resources
among the remaining buildings.
31
An analogous problem arises when trying to pursue at the same time both the
"economical" and the "intangible" criteria, which by their very nature cannot be combined
mathematically and, as shown in the presented examples, can be applied independently of
each other to obtain two different "optimal solutions".
Instead, one could prefer a solution that is only "nearly optimal" from the economical
viewpoint but at the same time reduces significantly the total expected number of endangered
persons. Following a similar idea presented in an earlier paper [9], as a first tentative an
attempt has been made to start from the design point that corresponds to the optimal solution
from the purely "economical" viewpoint, and to determine "paths" along which the decrease of
the economical return is smallest in comparison with the increase in "intangible" return (the
reduction in v).
The same procedure can be followed assuming the intangible optimal solution as a starting
point, and determining the path which leads to the smallest decrease in "intangible" return,
compared with the increase in "economical" return. The choice between the two procedures
depends on the relative importance of "economical" and "intangible" returns, which remains to
be decided upon by the user. These paths have been determined for some example cases by
means of a heuristic algorithm, though it has not always been possible to arrive at a complete
definition of the minimal descent path with respect to both optimal values.
An alternative approach is that of the so-called "Pareto" [11] optimal solutions, the
definition of which can be summarised in our case as follows: a solution (to which corresponds
a certain couple of values fcP, fvP) is optimal according to Pareto if no other solution exists
which presents values simultaneously greater than both fc and fv. The optimal points (which
are in general more than one) belong to the boundary line between the maximum points of fc
and fv on an fc-fv plane (see Fig. 6). Further research is being made in order to identify a more
efficient algorithm for determining these optimal points.
Fig. 6 - "Pareto" optimal solutions: points belonging to the boundary line between max fc and
max fv on the fc-fv plane. In this case all possible solutions have been calculated and plotted.
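A brute-force version of what Fig. 6 depicts can be sketched as follows; as in the figure, every candidate solution is evaluated, but the (fc, fv) pairs below are random placeholders, not the paper's allocation data.

```python
import random

def pareto_front(points):
    """Return the Pareto-optimal points: those not dominated by any other,
    where q dominates p iff q >= p in both coordinates and q != p."""
    front = []
    for p in points:
        dominated = any(q[0] >= p[0] and q[1] >= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

random.seed(0)
# Placeholder (fc, fv) returns for 200 candidate allocations.
solutions = [(random.random(), random.random()) for _ in range(200)]
front = pareto_front(solutions)
```

This O(n²) scan is exactly the "calculate and plot everything" strategy of Fig. 6; the more efficient algorithm the text calls for would avoid enumerating all candidate allocations in the first place.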
ACKNOWLEDGEMENTS
These researches have been supported by grants from the (Italian) Ministry of University
and Research to the University of Rome "La Sapienza" and by a C.N.R.-GNDT Research
Contract with the University of Florence.
REFERENCES
[1] G.Augusti, A.Borri, E.Speranzini: Seismic vulnerability data and optimum allocation of
resources for risk reduction; Proc. 5th Internat. Conf. on Structural Safety and Reliability
(ICOSSAR'89), San Francisco, August 1989, pp. 1,645-652.
[2] G.Augusti, A.Borri, E.Speranzini: Allocazione ottimale delle risorse per gli interventi di
riduzione del rischio sismico [Optimal allocation of resources for seismic risk reduction
interventions]; 4th Italian Nat. Conf. on Earthquake Engineering, Milano, 1989.
[3] G.Augusti, A.Borri, E.Speranzini: Optimum allocation of resources for seismic risk
reduction: economical return and "intangible" quantities; 9th European Conf. on Earthquake
Engineering, Moscow, September 1990.
[5] G.Augusti, F.Casciati: Multiple modes of structural failure: probabilities and expected
utilities; Proc. 2nd Internat. Conf. on Structural Safety and Reliability (ICOSSAR'77), Munich;
Werner-Verlag, Dusseldorf, 1977, pp. 39-57.
[7] G.Augusti, A.Borri: Generazione automatica di mappe del rischio sismico e dei possibili
interventi [Automatic generation of maps of seismic risk and of possible interventions]; Univ.
Firenze, Civil Engineering Dept., Structures Division, Publ. No. 3/86.
[8] G.Augusti, A.Borri, T.Crespeliani: Dynamic maps for seismic risk reduction and effects
of soil conditions; Univ. Firenze, Civil Engineering Dept., Geotechnical Division, Publ. No. 5/88.
ABSTRACT
The assessment of structural life expectancy for marine vessels is a relatively
complex task. This is due to uncertainties in the various parameters as well as
incomplete knowledge of their characteristics. In this paper, a methodology for assessing
structural life expectancy is suggested. The methodology is based on structural
reliability, theory of extremes, a plate wastage model, and system analysis. Example
applications using marine structures are discussed.
Z = g(X1, X2, ..., Xn)   (1)

in which the Xi's, i = 1, ..., n, are the basic random variables of loading and
strength, g(.) is the functional relationship for a particular potential
failure mode, and Z represents the "performance." The case Z = 0 gives the
failure surface, and the function is then defined as the limit state equation. The
case Z < 0 represents failure, and Z > 0 represents survival. The probability
of failure can then be expressed by
Pf = ∫ ... ∫_{Z<0} fX(x) dx   (2)

where fX is the joint density function of X = (X1, X2, ..., Xn), and the integration is
performed over the region where Z < 0. Because each of the basic random variables
has a unique distribution and they interact, this expression cannot be solved in
closed form. The method of Monte Carlo simulation with conditional expectation and
antithetic variates variance reduction techniques was used to determine the
probability of failure according to equation (2) [Ayyub and Haldar 1984, and White
and Ayyub 1985].
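A minimal sketch of the antithetic-variates idea on a toy limit state follows; the limit state g below is a placeholder, not one of the paper's hull modes, and the conditional-expectation part of the paper's variance reduction is omitted. Each standard-normal draw u is paired with its mirror image -u, and the two failure-indicator evaluations are averaged, which reduces the variance of the estimate when the indicator responds monotonically to u.

```python
import random

def pf_antithetic(g, n_cycles, dim, seed=1):
    """Estimate P(g(U) < 0) by Monte Carlo with antithetic variates: each
    cycle evaluates the failure indicator at u and at -u, then averages."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_cycles):
        u = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        fail_plus = 1.0 if g(u) < 0 else 0.0
        fail_minus = 1.0 if g([-ui for ui in u]) < 0 else 0.0
        total += 0.5 * (fail_plus + fail_minus)
    return total / n_cycles

# Toy linear limit state g(u) = 2 - u1 (placeholder): exact Pf = Phi(-2) ~ 0.0228.
pf = pf_antithetic(lambda u: 2.0 - u[0], n_cycles=20000, dim=1)
```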
The data from these strain measurements were provided in the form of a plot against
time, each plot of data being 10 seconds long, with approximately 30 plots in each of
the 8 wave-height/vessel-speed combinations (cells). The maximum strains in each
10-second interval were compiled into a histogram to determine the type of probability
distribution, and its statistical characteristics. The mean maximum strain was
converted to a mean "maximum dynamic pressure." The "design dynamic pressure" was
then calculated for the Island class, and the ratio of these two pressures
determined. The maximum dynamic pressure for other vessel types is determined by
calculating the design dynamic pressure and adjusting it by the same ratio.
PLATE THICKNESS
According to current practices of the U.S. Coast Guard, once every two years the
outside surface of each boat's hull is sandblasted in order to remove bio-fouling,
and to prepare the surface to receive the new coating system (anti-fouling paint).
This process, together with some expected pitting due to galvanic corrosion, results
in a reduction in the plate thickness on the order of one to three mils
(thousandths of an inch) per year.
As described in the next section, the performance function for plate deformation
was evaluated at the extreme value type CDF of the dynamic wave pressure loading for
each of the fifteen time periods of 2 years, 4 years, 6 years, etc., up to 30 years.
The result of those calculations is the probability in each period, of the strength
being exceeded by an extreme loading at some time during that period. The greater
the length of the period, the greater the probability of a higher loading. In other
words, an extreme load is evaluated in a time period T, where T can be any value from
zero to the design life of the boat. This extreme load can occur at any time, t,
within the time period T. Therefore the wastage must be characterized over the whole
period T, not at some discrete place such as the end or the middle.
μt = t Rm   (3)

and

(4)

in which μt is the mean value of wastage at time t, and COV(μt) is the coefficient of
variation of the wastage. The mean value of wastage increases linearly with time,
and the uncertainty, i.e. the COV, also increases with time.
where R = the mean wastage rate in mils/year, and T = the period in years. The COV of
the wastage was given by

COV = 0.41305 [COV(R)]^0.2864 T^0.29933   (6)
E(X) = u + γ/α   (11)

and

SD(X) = π/(α √6)   (12)

where u and α are the location and scale parameters of the extreme value distribution.
The constants π and γ have the values 3.14 and 0.577, respectively.
The calculation was performed on data that was simulated from the
characteristics of the random variables using the antithetic variates variance
reduction technique. Two thousand cycles of simulation were used for each of the
fifteen time periods considered, and the calculation performed once for each of the
two antithetic variates, in each cycle. The average of the results over all the
cycles in each time period gives the probability of failure in plate deformation of
any one plate, according to the deformation ratio criterion, in each time period.
The criterion for failure in plate deformation (i.e. end of structural life) had
been selected as 6 out of 28 plates meeting the deformation ratio criterion, as
described previously. The binomial distribution, given by
P_{f,n/N} = Σ_{k=n}^{N} [N! / (k! (N-k)!)] P_fp^k (1 - P_fp)^(N-k)   (13)
provides the probability of at least n events out of N trials, P_{f,n/N}, given the
probability of one event, P_fp, and based on the assumption that the events of the N
trials are independent. In this case, the "event" was the failure of some one plate,
and the N "trials" were the 28 possible failures. The failure events are actually
not completely independent; however, the degree of dependence is small and could be
assumed negligible. The resulting probability is the probability of meeting or
exceeding the failure criterion defining the end of structural life, in the time
period under consideration. The calculation was made for each of the 15 time periods
considered, and the resulting curve gave the cumulative distribution function of
structural life according to the plate deformation failure mode criteria.
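Equation (13) is a binomial tail sum; a direct transcription follows, with N = 28 plates and n = 6 failures as in the text, but with an illustrative single-plate probability, since the paper's simulated values are not reproduced here.

```python
import math

def prob_at_least(n, N, p_fp):
    """Probability of at least n failures among N independent plates,
    eq. (13): sum of binomial terms from k = n to N."""
    return sum(math.comb(N, k) * p_fp**k * (1 - p_fp)**(N - k)
               for k in range(n, N + 1))

# Illustrative single-plate failure probability (assumed, not from the paper):
p_life = prob_at_least(6, 28, p_fp=0.10)
```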
The first step in the analysis was to characterize the loading. The original
test data was compiled into a stress range histogram, with data from each operating
cell weighted according to the proportion of time historically spent in that wave-
height/vessel-speed combination. The histogram is expressed as a matrix of 42 "bins"
that are 100 psi wide, with values ranging from 600 to 4,700 psi, and the percentage
of the total number of data in each bin.
In the S-N data, the cycles to failure are given for each detail for a range of
constant cyclic loads, therefore the histogram had to be converted into an equivalent
constant load, called the "equivalent constant amplitude stress range." This was
accomplished using the Palmgren-Miner rule, which states that the cumulative damage
of constant amplitude loading is equal to the cumulative damage of the variable
amplitude loading. This is given by
Sre = [ Σ_{k=1}^{n} a(k)^m Pc(k) ]^{1/m}   (14)
where Sre is the equivalent constant amplitude stress for the detail, a(k) is the
value of the kth stress data bin, Pc(k) is the percent of all the data points found
in the kth bin, and m is the slope of the S-N line plotted on a log-log scale.
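Equation (14) as code, with a made-up two-bin histogram (the real histogram has 42 bins, and m = 3 is only a typical S-N slope assumed here):

```python
def equivalent_stress(bins, fractions, m):
    """Equivalent constant amplitude stress range, eq. (14):
    Sre = (sum_k a(k)^m * Pc(k))^(1/m)."""
    assert abs(sum(fractions) - 1.0) < 1e-9  # fractions must sum to one
    return sum(a**m * p for a, p in zip(bins, fractions)) ** (1.0 / m)

# Illustrative two-bin histogram (assumed values, not the paper's data):
s_re = equivalent_stress(bins=[1000.0, 2000.0], fractions=[0.8, 0.2], m=3)
```

Because the m-th powers are averaged before taking the root, rare high-stress bins contribute disproportionately, which is exactly the behaviour the Palmgren-Miner weighting is meant to capture.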
The mean value of life for the detail was then calculated from the S-N curve as
follows:
L = (10^(log C) / 1000) [ 1 / (Sre Fsr) ]^m   (15)
where log C is the intercept of the S-N line, Sre is the equivalent constant
amplitude stress range, m is the slope of the (log) S-N line, and Fsr is an
adjustment ratio. The histogram stresses were calculated from strains that were
measured in one particular ship, and at some distance from the fatigue details. The
ratio Fsr corrects the histogram stresses to what they would be in the particular
detail, and in the ship in question.
The number of loading cycles was assumed to follow a Weibull distribution. The
Weibull parameters w and k were calculated from the mean value and standard
deviation of the number of cycles required for failure at the equivalent constant
amplitude stress range. Parameter k was a function of the COV only, and was calculated
for each detail considered. Parameter w is given by
w = \frac{L}{\Gamma\left(1 + \frac{1}{k}\right)}    (16)
where Γ is the Gamma function, and the other variables are as given previously.
P_{f,IJK} = 1 - \exp\left[ -\left( \frac{L_{cp}}{w} \right)^{k} \right]    (17)
where P_{f,IJK} is the probability of failure in fatigue for the Ith simulation cycle, the
Jth fatigue detail, and the Kth time period; L_{cp} is the number of loading cycles in
the period, and w and k are the Weibull scale and shape parameters. The probability of failure
is again the mean of this calculation over all cycles of simulation. A stable
solution was generated in 500 cycles of simulation.
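The chain from Eqs. (16)-(17) can be sketched as follows. This is our own illustration with invented numbers, not the study's data; since the Weibull COV depends on k alone, k is recovered here by bisection:

```python
import math

# Sketch of Eqs. (16)-(17): Weibull life model. The shape parameter k is
# recovered from the COV of cycles-to-failure (the COV depends on k alone),
# the scale parameter w follows from the mean life L via Eq. (16), and the
# failure probability for a period with Lcp cycles is the Weibull CDF,
# Eq. (17). All numbers are illustrative, not from the patrol-boat study.

def weibull_cov(k):
    g1 = math.gamma(1.0 + 1.0 / k)
    g2 = math.gamma(1.0 + 2.0 / k)
    return math.sqrt(g2 / g1 ** 2 - 1.0)

def shape_from_cov(cov, lo=0.1, hi=50.0):
    """Bisection: weibull_cov is strictly decreasing in k."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if weibull_cov(mid) > cov:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def failure_probability(L, cov, Lcp):
    k = shape_from_cov(cov)
    w = L / math.gamma(1.0 + 1.0 / k)          # Eq. (16)
    return 1.0 - math.exp(-((Lcp / w) ** k))   # Eq. (17), Weibull CDF

p = failure_probability(L=2.0e6, cov=0.5, Lcp=5.0e5)
```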
ACKNOWLEDGEMENTS
The authors would like to acknowledge the support provided for this project by
the United States Coast Guard, the United States Naval Academy, and the University of
Maryland. Parts of this paper will appear in the May 1990 issue of the Naval
Engineers Journal, American Society of Naval Engineers.
REFERENCES
1. Ayyub, B.M., Haldar, A., (1984), "Practical Structural Reliability Techniques,"
Journal of Structural Engineering, American Society of Civil Engineers, Vol. 110,
No.8, Paper No. 19062, August 1984, pp. 1707-1724.
2. Ayyub, B.M., G.J. White, and T.F. Bell-Wright (1989), "Reliability-Based
Comparative Life Expectancy Assessment of Patrol Boat Hull Structures," Report
Submitted to U.S. Coast Guard R&D Center, Avery Point, August 1989.
3. Hughes, O.F., (1981), "Design of Laterally Loaded Plating - Uniform Pressure
Loads," Journal of Ship Research, SNAME, Vol. 25, No.2, June, 1981, pp. 77-89.
4. Hughes, O.F., (1983), Ship Structural Design: A Rationally-Based, Computer-Aided,
Optimization Approach, John Wiley and Sons, New York, NY.
5. Munse, W.H., T.W. Wilbur, M.L. Tellalian, K. Nicoll, and K. Wilson, (1982),
"Fatigue Characterization of Fabricated Ship Details for Design," Ship Structure
Committee Report SSC-318, 1982.
6. Purcell, E.S., Allen, S.J., and Walker, R.J., (1988), "Structural Analysis of the
U. S. Coast Guard Island Class Patrol Boat," Trans., SNAME, Vol. 96, 1988.
7. Schumacher, M., (1979), Seawater Corrosion Handbook, Noyes Data Corporation, Park
Ridge, New Jersey.
8. White, G.J., and B.M. Ayyub, (1985), "Reliability Methods for Ship Structures,"
Naval Engineers Journal, ASNE, Vol. 97, No.4, May, 1985, pp. 86-96.
PARAMETER SENSITIVITY OF FAILURE PROBABILITIES
Karl Breitung
Seminar für angewandte Stochastik der Universität München
Akademiestr. I/IV, D-8000 München 40, FR Germany
1 Introduction
In many reliability problems the solution depends on various parameters whose values are not known exactly.
Often reasonable estimates are calculated for these parameters, and these estimates are then treated
as if they were the true parameter values.
The essential shortcoming of such methods is that they neglect the gap between the given data and the
mathematical model. From the data, the structure of the model and its parameters can be estimated only
with some uncertainty. The problem of calculating the structural safety in such cases is considered in [8].
It is important to have information about the influence of changes in the parameters on the result, which
is in general the failure probability. First results for this problem can be found in the Ph.D. thesis of M.
Hohenbichler [10]. A drawback of this work is that no analytic results are obtained for non-normal random
variables. Further, only the influence on the beta value, and not on the failure probability, is considered.
The basic idea for constructing simple approximations for these sensitivity factors is the same as for
deriving estimates for the failure probability. The Laplace method is used for obtaining asymptotic approx-
imations of these quantities.
These results are also important for the optimization of structures, when an optimal set of parameters
has to be found, which minimizes the failure probability under some restrictions. Further applications are
omission sensitivity factors (see [11]).
Given is a random vector X(θ) = (X_1(θ), ..., X_n(θ)), which depends on a parameter vector θ = (θ_1, ..., θ_k).
For all θ the random vector X(θ) has a continuous probability density f_θ(x) with f_θ(x) > 0 for all x ∈ ℝⁿ.
The loglikelihood function of the density is then l_θ(x) = ln(f_θ(x)).
Further, a limit state function g : ℝⁿ → ℝ, x ↦ g(x, θ) is given, which depends on the same parameter
vector θ. For fixed θ the n-dimensional space is divided into a failure domain F(θ) = {x; g(x, θ) ≤ 0} and
the safe domain S(θ) = {x; g(x, θ) > 0}. The limit state surface G(θ) = {x; g(x, θ) = 0} is the boundary
of the failure domain.
The failure probability P_θ(F) is given by
P_\theta(F) = \int_{F(\theta)} f_\theta(x)\, dx    (1)
Here the failure probability depends on the parameter vector θ. Changes in the parameters have an influence
on the failure probability.
For the approximate computation of the failure probabilities P_θ(F), asymptotic methods have been
developed. The case of normal random variables is treated in [3]. A generalization of this method for
non-normal random variables without the use of transformation techniques is given in [4] and [6]. For this
function P_θ(F) of θ, asymptotic approximations for the partial derivatives with respect to
the θ_i's are derived in the following.
The partial derivative of a function with respect to a variable (parameter) indicates the influence of this
variable on the value of the function. But since the value of the partial derivative depends on the scale
used for this variable, it may be difficult to compare these values for different variables. Therefore it can be
useful to measure the sensitivity to one variable by the partial elasticity. This measure of sensitivity is used
in mathematical economics; a definition can be found in [7], p. 195. The partial elasticity of a function
f(x) with respect to the variable x_i is given by
\varepsilon_i(f(x)) = \frac{x_i}{f(x)} \frac{\partial f(x)}{\partial x_i}    (2)
This quantity is independent of the scale. Approximately, ε_i(f(x)) is the change of f(x) in percent if x_i
is changed by 1 percent.
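Numerically, the elasticity can be approximated by central differences. The function and the example below are our own illustration, not from the text:

```python
# Numerical sketch of the partial elasticity: eps_i(f) = (x_i / f(x)) * df/dx_i,
# approximated by central differences. For f(x) = x1^2 * x2 the exact
# elasticities are 2 and 1: a 1% change in x1 changes f by about 2%.

def elasticity(f, x, i, h=1e-6):
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    df = (f(xp) - f(xm)) / (2.0 * h)
    return x[i] * df / f(x)

f = lambda x: x[0] ** 2 * x[1]
e1 = elasticity(f, [3.0, 5.0], 0)   # ~ 2.0, independent of the scale of x1
e2 = elasticity(f, [3.0, 5.0], 1)   # ~ 1.0
```

Rescaling x1 (say, to other units) leaves e1 unchanged, which is exactly the property that makes elasticities comparable across parameters.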
In this section a general expression is given for the partial derivatives, with respect to the parameters, of
multivariate integrals depending on those parameters.
First we consider the one-dimensional case. Given are two continuously differentiable functions a(r) and
b(r), and a continuous function f(x, r) which has a continuous partial derivative with respect to r.
The Leibniz theorem for the differentiation of an integral gives for the partial derivative with respect to
r (see [1], p. 11, 3.3.7):
\frac{\partial}{\partial r} \int_{a(r)}^{b(r)} f(x,r)\, dx = \int_{a(r)}^{b(r)} \frac{\partial f(x,r)}{\partial r}\, dx + b'(r) f(b(r),r) - a'(r) f(a(r),r)    (3)
The first summand describes the influence of r on the function f(x, r) and the second the influence on the
boundary points.
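The rule (3) is easy to check numerically; the integrand and limits below are our own illustrative choice:

```python
import math

# Finite-difference check of the Leibniz rule (3) for an illustrative case:
# d/dr of the integral of exp(r*x) over [0, r] equals the domain term
# (integral of x*exp(r*x)) plus the boundary term b'(r)*f(b(r),r) = exp(r*r);
# the lower limit is constant, so a'(r) = 0.

def integral(r, n=20000):
    # composite midpoint rule on [0, r]
    step = r / n
    return sum(math.exp(r * (j + 0.5) * step) for j in range(n)) * step

def leibniz_rhs(r, n=20000):
    step = r / n
    domain = sum((j + 0.5) * step * math.exp(r * (j + 0.5) * step)
                 for j in range(n)) * step
    boundary = math.exp(r * r)     # b'(r) * f(b(r), r) with b(r) = r
    return domain + boundary

r, h = 1.0, 1e-5
lhs = (integral(r + h) - integral(r - h)) / (2.0 * h)
rhs = leibniz_rhs(r)
```

At r = 1 both sides equal 1 + e, since the domain term integrates x·eˣ over [0, 1].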
This result can be generalized for functions of several variables. Here the boundary of the integration
domain is a surface in the n-dimensional space.
Let there be given two continuously differentiable functions f : ℝⁿ × I → ℝ, (x, r) ↦ f(x, r), and
g : ℝⁿ × I → ℝ, (x, r) ↦ g(x, r), with I an open interval. Under some regularity conditions the integral
exists for all r ∈ I and the partial derivative of this integral with respect to r is given by
=: D_2(h)
For small ε > 0, in a neighborhood G_ε(r) = {x; min_{y∈G(r)} |y − x| < ε} of G(r) a coordinate system can be
introduced in which each y ∈ G_ε(r), written in the form y = x + δ·n(x) (here n(x) denotes the surface
normal at x) with x ∈ G(r), has the coordinates (x, δ). In this coordinate system the integral D_2(h) can be
written in the form
D_2(h) = \int_{g(x,r)=0} \int_0^{l(x,h)} f(x + \delta\, n(x), r)\, D(x, \delta)\, d\delta\, ds_r(x)    (9)
Here D(x, δ) is the transformation determinant for the change of the coordinates. Due to the definition of
the coordinates, D(x, 0) = 1. The function l(x, h) is defined implicitly by the equation g(x + l(x, h) n(x), r +
h) = 0. The existence of this function can be proven (for sufficiently small h) using the implicit function
theorem (see [9], p. 148), if always ∇_y g(y, r) ≠ 0 for y ∈ G.
In the limit for h → 0:
\lim_{h \to 0} \frac{1}{h} D_2(h) = \int_{g(x,r)=0} l_h(x, 0)\, f(x, r)\, ds_r(x)    (10)
Making a Taylor expansion of the function g(x, r), we find
l_h(x, 0) = -\frac{g_r(x, r)}{|\nabla_x g(x, r)|}    (12)
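The resulting boundary term can be checked by Monte Carlo on a simple case of our own construction (not the paper's example): with a standard bivariate normal density independent of r and g(x, r) = x₁ − r, the domain term vanishes and the parameter derivative of P_r(F) reduces to the surface integral, which here equals the standard normal density φ(r).

```python
import math
import random

# Monte Carlo check (illustrative): F(r) = {x1 <= r}, standard bivariate
# normal density f, g(x, r) = x1 - r. Then g_r = -1, |grad g| = 1, and the
# surface term integrates f over the line x1 = r; the x2-direction
# integrates out, leaving phi(r). The finite-difference Monte Carlo
# estimate of dP/dr should agree with this analytic value.

def phi(t):
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

random.seed(2)
r, h, n = 1.0, 0.05, 400000
hits_plus = hits_minus = 0
for _ in range(n):
    x1 = random.gauss(0.0, 1.0)     # x2 plays no role in this limit state
    hits_plus += x1 <= r + h
    hits_minus += x1 <= r - h
mc_derivative = (hits_plus - hits_minus) / (2.0 * h * n)
surface_term = phi(r)               # analytic value of the boundary integral
```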
5 Asymptotics
In this section two results of the Laplace method for domain and surface integrals are given. Proofs can be
found in [5]. We consider integrals of the form
(18)
(19)
with
and the n × (n − 1) matrix A(x') = (a_1(x'), ..., a_{n−1}(x')) is composed of vectors a_i(x'), which constitute
an orthonormal basis of the tangential space of G at x'. The n × n matrix C(x') is the cofactor matrix of
the n × n matrix H(x').
In the same way an approximation for surface integrals can be derived.
(21)
If there is given an arbitrary probability density f : ℝⁿ → ℝ, x ↦ f(x) with f(x) > 0 for all x ∈ F ⊂ ℝⁿ,
the integral can be written in the following form
I = \int_F \exp(\ln(f(x)))\, dx    (22)
If the probability content of F is small, we can assume that the density f(x) is also small in F and that
the logarithm l(x) = ln(f(x)) is then negative for all points in F. Defining
(23)
Here we define
(25)
we have Laplace-type integrals. For the asymptotic behavior of this function for β → ∞ the usual methods
can be applied.
If there is only one point x* on the boundary G where the integrand is maximal, then under some regularity
conditions (see [4] and [5]) the following approximation for the failure probability is obtained
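A one-dimensional illustration of the Laplace method (our example, not from the paper): for the integrand exp(β²h(x)) with h(x) = 1 − cosh(x), the maximum of h sits at x = 0 with h''(0) = −1, and the Laplace approximation √(2π)/β captures the integral ever better as β grows.

```python
import math

# Laplace-method demo: the integral of exp(beta^2 * (1 - cosh(x))) over the
# real line concentrates near the maximum of the exponent as beta -> infinity,
# and is approximated by sqrt(2*pi)/beta (relative error of order beta^-2).

def laplace_demo(beta, n=50000, lim=6.0):
    step = 2.0 * lim / n
    numeric = sum(math.exp(beta ** 2 * (1.0 - math.cosh(-lim + (j + 0.5) * step)))
                  for j in range(n)) * step
    approx = math.sqrt(2.0 * math.pi) / beta   # Laplace approximation
    return numeric, approx

num2, app2 = laplace_demo(2.0)
num4, app4 = laplace_demo(4.0)
rel2 = abs(num2 - app2) / num2
rel4 = abs(num4 - app4) / num4   # smaller: the approximation improves with beta
```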
The partial derivative of P_θ(F) with respect to θ_i is obtained by interchanging differentiation and integration.
The partial elasticity ε_i(P_θ(F)) of the probability P_θ(F) with respect to θ_i is then:
\varepsilon_i(P_\theta(F)) = \frac{\theta_i}{P_\theta(F)} \int_F \frac{\partial l(x, \theta)}{\partial \theta_i} \exp(l(x, \theta))\, dx    (31)
As scaling factor a suitable β_0 is chosen, for example β_0 = \sqrt{-\max_F l(x, \theta^*)} for a θ* with max_F l(x, θ*) < 0.
Replacing now in the integral the function l(x, θ) by h(x, θ) = l(x, θ)/β_0², we obtain

and in the same way for the partial elasticity with respect to θ_i
\varepsilon_i(P_\theta(F)) = \frac{\theta_i}{P_\theta(F)} \int_F \beta_0^2\, \frac{\partial h(x, \theta)}{\partial \theta_i} \exp(\beta_0^2\, h(x, \theta))\, dx    (33)
We assume now that there is only one point x* on the boundary of F where the loglikelihood function
achieves its global maximum with respect to F. For the partial derivatives and elasticities, the following
asymptotic approximations are then obtained using equation (18).
The last relation in the equation above is found by comparing equations (18) and (21):
8 Example
Let there be given two independent random variables X_1 and X_2, each with a lognormal distribution with
E(ln(X_i)) = μ and var(ln(X_i)) = σ² for i = 1, 2. The joint probability density function f(x_1, x_2) of these
random variables is then
( 46)
( 47)
(48)
(49)
(50)
(51)
At the point (γ, γ), the only global maximum of the loglikelihood function, we obtain for the norm of these
vectors:
(54)
we get then
(55)
The gradient of l at (γ, γ) is parallel to the vector (1, 1); a unit vector orthogonal to this vector is therefore
a_1 = (1/√2, −1/√2). This gives
(56)
(57)
Now, using approximation formula (27), we obtain the following approximation for the failure probability.
The partial derivatives of the loglikelihood function at (γ, γ) with respect to the parameters μ and σ are:
(60)
(61)
\varepsilon_\mu(P_\theta(F)) \approx \frac{2\mu\,(\ln(\gamma) - \mu)}{\sigma^2}    (64)
\varepsilon_\mu(P_\theta(F)) \approx \frac{2\mu\,(\ln(\gamma) - \mu)}{\sigma^2}    (67)
In this paper asymptotic approximations for the sensitivity of the failure probability to changes in the
parameters are derived. The parameter influence can be of a general form, i.e. the probability density and
the limit state function may depend on the parameters.
The results give only approximations. They can be improved by using suitable numerical or Monte-Carlo
integration techniques.
References
[1] M. Abramowitz and I.A. Stegun. Handbook of Mathematical Functions. Dover, New York, 1965.
[2] N. Bleistein and R.A. Handelsman. Asymptotic Expansions of Integrals. Holt, Rinehart and Winston,
New York, N.Y., 1975.
[3] K. Breitung. Asymptotic approximations for multinormal integrals. Journal of the Engineering Me-
chanics Division ASCE, 110(3):357-366, 1984.
[4] K. Breitung. Asymptotic approximations for probability integrals. Journal of Probabilistic Engineering
Mechanics, 4(4):187-190,1989.
[5] K. Breitung. Asymptotische Approximationen für Wahrscheinlichkeitsintegrale. 1990. Habilitations-
schrift eingereicht an der Fakultät für Philosophie, Wissenschaftstheorie und Statistik der Ludwig-
Maximilians-Universität München.
[6] K. Breitung. Probability approximations by loglikelihood maximization. 1990. Submitted to the Journal
of the Engineering Mechanics Division, ASCE.
[7] A.C. Chiang. Fundamental Methods of Mathematical Economics. McGraw-Hill International Book
Company, Tokyo, third edition, 1984.
[8] A. Der Kiureghian. Measures of structural safety under imperfect states of knowledge. Journal of the
Engineering Mechanics Division ASCE, 115(5):1119-1140, 1989.
[9] W. Fleming. Functions of Several Variables. Springer, New York, 1977.
[10] M. Hohenbichler. Mathematische Grundlagen der Zuverlässigkeitstheorie erster Ordnung und einige
Erweiterungen. PhD thesis, Technical University of Munich, Munich, FR Germany, 1984.
F. Vasco Costa
Consulmar, Rua Joaquim A. Aguiar, 27
1000 Lisbon, Portugal
ABSTRACT
INTRODUCTION
will depend not only on the reliability of the information available on the
that can result from the possible modes and degrees of damage and, last but
not the least, how much it will cost to reduce the risks of being reached
low resistances are known, and monetary values can be attributed to the
all direct and indirect expenses that can result from distinct modes and
It is hoped that not only designers but all who have to participate in the
being too costly, will find the expectation ratio a convenient concept in
vary from one structure to another, even when submitted to the same actions.
For instance, if along a river some dykes are built to protect plantations
against floods and other dykes to protect densely inhabited regions, the
the safety of the dykes for the protection of the inhabited regions, even
if at the cost of reducing the safety of the dykes for the protection of
10^-6 per year (Baecher, et al. 1980, pg. 455; Burke, 1982, pg. 129).
The statement that in an ideally designed structure all its elements have
literature (Sorensen and Jensen, 1985, pg. 63). In fact, as the consequen-
take into due account the specific functions to be fulfilled and the
compare alternative designs without having to put tag prices on human and
large amounts of money, the use of the utility concept did not spread as it
But can risks involving human and social values be dealt with as if they
We all know that some people are more willing than others to incur
or not in a hurry and on being the designer, the owner, the operator or the
on how we can perceive and react to risks see "Technological Risks" (Lind,
ed., 1982), "Structural Safety" (Ferry and Castanheta, 1985) and "Levels of
The use of the concept of "expectation", meaning by that the product of the
that of the simple probability for the selection of the degree of safety
and costs in a common unit, be it dollars, hours, lives or any other unit,
E = \frac{\text{Expectation of benefits}}{\text{Expectation of costs}}    (1)
Sure benefits are to be included in the numerator and sure costs in the
denominator of the fraction (1), taking into account that their probabilities
are equal to 1.
human values. This assuming that personal biases will equally affect the
occasions, will not likely appraise with a same criterion, large and small,
sure and probable, present and future benefits or costs, if human or social
values are involved. But if one person appraises all of them in a given
occasion, the values found for the ratio of the sum of sure plus probable
benefits to the sum of sure plus probable costs will certainly be less
affected by personal bias than the values attributed to each single benefit
dividing the benefits and costs expected for each period of time the
structural system will be in operation by (1 + r)^n, r being the rate of
interest and n the number of periods of time the system has been kept in
service (Baecher et al. 1980, pg. 450, and Vasco Costa, 1987, pg. 73).
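The expectation ratio (1) combined with this discounting can be sketched as follows. All figures are invented for illustration; "sure" items simply enter with probability 1:

```python
# Sketch of the expectation ratio (1) with discounting: each benefit or cost
# expected in period n is weighted by its probability and divided by
# (1 + r)^n before summing. The project figures below are hypothetical.

def expectation_ratio(benefits, costs, rate):
    """benefits, costs: lists of (probability, value, period) tuples."""
    pv = lambda items: sum(p * v / (1.0 + rate) ** n for p, v, n in items)
    return pv(benefits) / pv(costs)

benefits = [(1.00, 120.0, 1), (1.00, 120.0, 2), (0.90, 80.0, 3)]
costs = [(1.00, 200.0, 0), (0.05, 500.0, 2)]   # construction + probable damage
E = expectation_ratio(benefits, costs, rate=0.08)
```

A ratio above unity indicates that the discounted expected benefits outweigh the discounted expected costs.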
pg. 451), as a tool to help to decide when the construction of a new dam is
to be authorized.
evaluated taking into account not only the particular functions each of
them will have to fulfil but, as well, how their eventual failure will
affect the behaviour of others, how failures will propagate once started,
how much will repairs cost and for how long will the structure, for each
task, (Lind, 1982, Guedes-Soares and Viana, 1985, and Svenson and Fischhoff,
1985). But there can be no doubt that the taking into account of such
can also vary between wide limits, from a few dollars to several billion
considered. In general, the larger the benefits or the costs that can
result from the occurrence of an event, the smaller will be the probability
of such occurrence.
below unity will not be economically profitable and, on the other hand,
The use of the expectation ratio concept will present particular interest
when alternative designs are to be compared, the more reliable being more
into account the expectations of only the extra benefits to be gained and
road or to build one more road? Does it pay to design a structure with
not vary between such wide limits as the values of acceptable probabilities
hoped that the selection of acceptable expectation ratios will not pose
probabilities of failure.
FINAL CONSIDERATIONS
The aim of engineering design is to put to the best use for society the
resources, the means and the information available. In the case of the
elements and, last but not the least, estimates of all direct and indirect
expenses incurred should the possible modes and degrees of damage be
reached.
more often, but, as their failure will not bring such serious consequences,
lighter, in order to put to the best use the resources, the means and
distinct modes and degrees of failures and, what shall never be forgotten,
how much increases in safety will add to weight and to costs.
REFERENCES
Vasco Costa, F.: PERCEPTION OF RISKS AND REACTION TO ACCIDENTS, Second IFIP
Introduction
The integrated analysis code system is NESSUS 1 (Numerical Evaluation of Stochastic Structures
Under Stress). NESSUS provides an integrated analysis system for automated calculations of the
probabilistic output in terms of user-defined random variables and random fields. The structural analysis
code uses perturbation algorithms which are used to define the sensitivity of the response variables to
changes in the random variables. A structural reliability code, referred to herein as the fast probability
integration (FPI) algorithm, combines this sensitivity data with the statistical models for each random
variable to predict the random response.
The paper briefly highlights the elements of the NESSUS system and the supporting algorithms.
Application of NESSUS to two significant, time-dependent problems is reported in order to validate the
capability of the NESSUS code.
The major tool used in structural reliability analysis of civil structures has been the stochastic finite
element method [1,2]. Stochastic FEM solutions are in the general class of first-order, second moment
formulations. First-order formulations make use of the first-order Taylor series expansion of the stiffness
matrix in terms of each of the independent random variables {x}, as shown in Eq. (1)
[K] \approx [K]_0 + \sum_i \left. \frac{\partial [K]}{\partial x_i} \right|_0 \left( x_i - E\{x_i\} \right)    (1)
where the subscript-O denotes the deterministic value and E{} is the expectation operator. The coupled set
of equations giving the variance in the displacement solution is given by
\operatorname{var}\{u\} \approx \sum_i \left( \left. \frac{\partial \{u\}}{\partial x_i} \right|_0 \right)^2 \operatorname{var}\{x_i\}    (2)
for the case of no cross-correlation between the independent random variables, {x}. The resulting systems
of equations are then solved for the expected displacements and the second moment (variance) of the
displacements. The approximation of first order dependencies for the random variables is only valid if the
variance in each random variable is small (e.g., a few percent). It should also be noted that the stochastic
FEM solution does not make use of any information on the type of distributions for the random variables
and the solution variables. Thus the method is suitable only for approximate random variable assessments,
valid near the mean value of the solution.
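The first-order, second-moment idea can be sketched for a single response quantity. The example below is our own illustration (an axial bar with u = FL/EA, invented statistics), not the stochastic FEM implementation itself:

```python
# First-order second-moment (FOSM) sketch: the mean response is evaluated at
# the mean inputs, and the variance sums (du/dx_i)^2 * var(x_i) over the
# independent random variables. Sensitivities are taken by central
# differences with a relative step, so widely scaled inputs (E ~ 1e11 Pa)
# keep full floating-point accuracy.

def fosm(func, means, stds, rel=1e-6):
    mean_u = func(means)
    var_u = 0.0
    for i, s in enumerate(stds):
        h = rel * abs(means[i])
        xp, xm = list(means), list(means)
        xp[i] += h
        xm[i] -= h
        dudx = (func(xp) - func(xm)) / (2.0 * h)
        var_u += (dudx * s) ** 2
    return mean_u, var_u

# x = [F (N), E (Pa), A (m^2)], bar of length 1 m: u = F*L/(E*A)
bar = lambda x: x[0] * 1.0 / (x[1] * x[2])
mu, var = fosm(bar, [1.0e4, 2.0e11, 1.0e-4], [1.0e3, 1.0e10, 5.0e-6])
```

As the text notes, this linearization is only trustworthy for small input variances and near the mean of the solution.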
Relatively recent work in reliability analysis has focused on the development of rapid estimation
algorithms for integrating the volume under the probability density function for a multivariable problem
[3, 4, and 5]. Figure 1 illustrates a joint probability density function for two variables, x1 and x2. The
two limiting curves, Z(x)=Constant, represent performance function levels, such as displacement or stress
magnitudes, natural frequencies, etc., given by a response model of the system and a specified limit
condition on the response.
The probability that the response exceeds the specified limit condition is computed by integrating
the volume under the joint-PDF surface beyond the appropriate Z(x)=Constant limit state. Monte Carlo
simulation estimates this volume by sampling a number of solutions to determine how many are beyond
the limit curve, relative to the number inside the limit curve. The volume under this surface may be
estimated first by replacing the actual variables by "reduced" or standard normal variables
u_i = \frac{x_i - \mu_i}{\sigma_i}    (3)
The reduced variable formulation assures that the operations to estimate the probability level or reliability
are invariant with respect to the actual form of Z(x)=Constant used [6]. When the physical variables are
rewritten in this form, the joint PDF for the problem in Figure 1 may be seen in projection as a set of
concentric circles of constant σ (standard deviations) as shown in Figure 2.
The response surface g(u)=O in Figure 2 is generally a nonlinear function of the two random
variables and results from mapping Z(x)=Constant into the new variable space. The most probable point
(MPP) is given by the point on g(u)=O which is closest to the origin. This is usually determined by fitting
a local tangent to g(u) and moving this tangent until the MPP is estimated [3, 4]. If the joint probability
density functions for each random variable are normal distributions, the probability of exceeding the g(u)=O
limit state is estimated by fitting g(u) by a hyperplane (First Order Reliability Method; FORM), or by a
hyper-quadratic surface (Second Order Reliability Method; SORM) and computing the distance β from the
origin to the MPP. β is the reliability index for the limit state. For the case of normal distributions and a
linear limit state g(u)=0, β is directly related to the probability of exceeding the limit state
P_f = \Phi(-\beta)    (4)
where Φ(−β) is the cumulative distribution function for the normal distribution, evaluated at −β.
When the distributions are not normal distributions, Reference [3] uses a normal mapping for the
variables that results in "equivalent" normal distributions for each variable. Calculation of β for the
mapped "equivalent" normal distributions provides an estimate of the probability of exceeding the limit
state through (4).
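For a linear limit state in standard normal space, β, the MPP, and Eq. (4) can be computed in closed form. The sketch below is an illustrative textbook case, not the NESSUS implementation:

```python
import math

# FORM sketch for a linear limit state g(u) = a.u + b in standard normal
# space (failure when g <= 0, b > 0 assumed): the MPP is the closest point
# of g(u)=0 to the origin, beta = b / ||a||, and Eq. (4) gives
# P_f = Phi(-beta), which is exact for a hyperplane.

def std_normal_cdf(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def form_linear(a, b):
    norm_a = math.sqrt(sum(ai * ai for ai in a))
    beta = b / norm_a                             # reliability index
    mpp = [-ai * beta / norm_a for ai in a]       # most probable point, g(mpp)=0
    return beta, mpp, std_normal_cdf(-beta)

beta, mpp, pf = form_linear(a=[1.0, 1.0], b=3.0)  # g(u) = u1 + u2 + 3
```

For a nonlinear g(u), as the text describes, the tangent hyperplane at the MPP replaces the exact surface and (4) becomes an approximation.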
In the algorithm of Wu [5], the response surface Z(x_i) is approximated to be linear or pure
quadratic (no mixed terms). The non-normal random variables may be mapped into new "equivalent"
normal distributions using the same normal mapping of [4], but modified by a third parameter taken from
In (5), φ(u) is the Gaussian probability density function, serving as a weighting function, and
A is a constant for each of the "equivalent" normal distributions. Each A, and the values of μ, σ for the
equivalent cumulative distribution Φ(u), are computed from (5) in order to minimize the error in the
variable mapping in the least squares sense. The resulting approximate g(u)=0 is then fitted with a
hyperplane (FORM) to determine the MPP, as above. The value of β is then computed for the MPP
distance, and the probability of exceeding g(u)=0 is taken from (4).
In the NESSUS approach to probabilistic FEM analysis, the response function Z(x_i) is
approximated by a set of finite element solutions near the current MPP. The NESSUS code was developed
in order to compute perturbed solutions near a deterministic state in an efficient manner [7]. The
resulting set of solutions to Z(x_i) can then be used to fit a hyperplane or hyperquadratic surface to Z.
After the fitting to Z(x_i), the probability of exceeding a limit state for g(Z)=0 is computed through the
equivalent normal variable mapping.
The NESSUS code utilizes two solution algorithms for linear problems: a standard displacement
method; and a mixed approach based on nodal variable equilibrium. The mixed approach solves the
simultaneous field problems of equilibrium in terms of nodal data. To do this, the strains and stresses are
projected to the nodes from the integration points. The mixed fonnulation used is based on an application
of the Hu-Washizu variational principle. The variational fonnulation is satisfied in an iterative manner as
described in [8] and has the following three-field form
(6)
The set of relations in (6) might be solved directly, except for the substantial matrix size issues.
NESSUS uses an iteration strategy to obtain nodal equilibrium from (6). The deterministic stiffness matrix
is factorized to obtain a displacement update in the iteration sequence, given a set of nodal stresses
(7)
The strain projection to the node for the current displacement state is then given by
(8)
(9)
The system of equations is iteratively solved and updated until suitable convergence is achieved. At this
point, the nodal values of displacement and stress are obtained which "best" represent equilibrium for the
applied loading or displacement conditions.
Use of a nodal approach to probabilistic modeling is favored over the standard displacement
approach. In the standard approach there is an ambiguity as to what are the values of nodal stress for
which the probability level is to be assigned. In the NESSUS approach, there is no ambiguity, and the
values of nodal stress are seen to be defined consistent with overall equilibrium requirements as imposed
by the mixed formulation.
In the case of probabilistic modeling, we wish to obtain the solution of the equilibrium problem for
conditions that are perturbed with respect to each random variable. That is, the random problem is
represented as the following set of relations, where the hats denote random variables
(\hat{B}, \hat{D}, \hat{C}) = (B, D, C) + \Delta(B, D, C), \quad \hat{u} = u + \Delta u, \quad \hat{f} = f + \Delta f    (10)
The perturbations in (10) are obtained first for the displacements by taking the effect of each random
(11)
where the strain and stress terms for the (n+1) iterate are obtained using the same algorithm as for the
nodal equilibrium updating.
So long as the perturbations are sufficiently close to the original state, the convergence is generally
quite satisfactory. The NESSUS code allows the user to take perturbations on one or both sides of the
base state, and to take as many perturbations of each variable as desired. A hyperplane or a
hyperquadratic (no mixed terms) surface is then fitted to the resulting perturbation data to define the
approximate response function, Z(x_i).
The use of a perturbation algorithm has been found to save significant time over a full
refactorization of the stiffness matrix. In the case of eigenvalue analysis, the perturbed solutions are
obtained from the base solution through sub-space iteration. Transient linear and nonlinear problems use
the same "equilibrium shift" as the above iteration scheme to update all of the transient matrices at each
load step.
The general NESSUS algorithm is to combine the Wu algorithm for estimating the probability of
exceedance from an approximate Z(x) function [5], and to use the perturbation algorithm to develop a
database for obtaining Z(x) approximations about a specified design point.
The usual NESSUS solution begins by taking perturbations of the random variables about the
initial, mean value state (i.e., the deterministic solution). The response surface is fitted to the perturbation
data and the deterministic solution. Generally, a first order (hyperplane) fit is sufficient. The FPI
algorithm uses the approximation of Z(x) to compute the probability of exceedance for various levels of the
response function.
Figure 3 illustrates the results for a beam vibration example, denoting the first-order solution as the
MVFO (mean value, first order) solution for the probability of exceeding various natural frequencies. A
normally distributed solution would fall on a straight line in this figure which uses the Gaussian
distribution scale for the vertical axis. The cross-data points are computed natural frequencies based on a
set of default probability levels.
Each discrete solution point has a defined MPP condition from the FORM algorithm described
above. The MPP consists of the set of random design variable values that correspond to the calculated
level of probability. These are the values of the random variables that are the ones most likely to occur
for that probability level.
The MVFO solution is not sufficiently accurate away from the deterministic solution, as can be
seen by comparison of the predicted distribution with the "exact" solution for this problem, obtained by an
analytical solution of the beam vibration problem with log-normal random variables. However, assuming
that the probability level remains constant, we can update the MVFO solution by substituting the MPP
conditions into the system equations and perform a new solution of the resulting equations. The updating
procedure is referred to as the advanced-MVFO (AMVFO) solution [9]. It is seen in Figure 3 that the
updated solution is quite close to the exact solution.
[Figure 3: Cumulative probability of exceeding various natural frequencies; MVFO and AMVFO solutions compared with the exact solution.]
Iteration at the current solution (design) point may be performed by requiring a new set of
perturbations at the MPP condition. The FPI algorithm can then be applied to the new approximated
response function to update the probability level. This process can be repeated as needed for convergence.
In practice, the result is generally sufficiently accurate that iterations are not needed. NESSUS permits the
user to allow the code to test for the need to perform iterations by computing local perturbations at each
discrete solution point.
The elastoplastic capability of NESSUS is based on the standard von Mises formulation with
isotropic, kinematic, and combined hardening rules. The elastoplastic curve is modeled as a piecewise
linear interpolation of effective stress versus effective plastic strain. Perturbation of the nonlinear solution
is obtained by iteration, as in the linear case, with the current histories for each random variable updated
based on the detenninistic, incremental solution.
The transient load history formulation is a Newmark-β algorithm modified for the mixed
variational formulation. Dynamic equilibrium at the end of each time step is used to estimate a new
displacement correction term in the iteration algorithm for the mixed method. Nodal displacements,
strains, and stresses are updated in the same manner as for the static problem.
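The paper's algorithm is the mixed-method variant; the sketch below shows plain Newmark-β time stepping for a single-DOF oscillator (our own illustration, with the average-acceleration parameters β = 1/4, γ = 1/2, which are unconditionally stable and energy-conserving for linear problems):

```python
import math

# Newmark-beta sketch for free vibration of m*x'' + k*x = 0. Predictors use
# the classical Newmark expansions; the new acceleration is then solved from
# dynamic equilibrium at t + dt. With beta=1/4, gamma=1/2 the amplitude and
# the energy 0.5*k*x^2 + 0.5*m*v^2 are preserved for this linear system.

def newmark(m, k, x0, v0, dt, steps, beta=0.25, gamma=0.5):
    x, v = x0, v0
    a = -k * x / m                              # initial acceleration, no load
    for _ in range(steps):
        x_pred = x + dt * v + dt * dt * (0.5 - beta) * a
        v_pred = v + dt * (1.0 - gamma) * a
        # solve m*a_new + k*(x_pred + beta*dt^2*a_new) = 0
        a_new = -k * x_pred / (m + k * beta * dt * dt)
        x = x_pred + beta * dt * dt * a_new
        v = v_pred + gamma * dt * a_new
        a = a_new
    return x, v

m, k = 1.0, 4.0 * math.pi ** 2                  # natural period T = 1.0
x_end, v_end = newmark(m, k, x0=1.0, v0=0.0, dt=0.01, steps=100)  # one period
```

The main discretization error of this scheme is a small period elongation of order (ω·dt)², which is why the displacement returns very nearly to its initial value after one period.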
The elastoplasticity validation problem is shown in Figure 4. The mesh consists of twenty equally
spaced axisymmetric ring elements loaded by an internal pressure applied incrementally. The internal
pressure magnitude and the yield stress are taken to be random variables. Zero strain hardening is
assumed.
[Figure 4: Thick-walled cylinder model; inner radius 10 in, outer radius 20 in, internal pressure p (psi).]
The performance variable is taken to be the radial stress at 1.25 units out from the inner radius.
Figure 5 plots the CDF for the radial stress at this radius in terms of the standard normal variable levels of
σ. The MVFO solution is based on the sensitivity factors from NESSUS derived when the body is still
elastic. It is seen that the CDF is accurately predicted only for the elastic regime (+σ). The exact solution
is obtained by Monte Carlo simulations of the known plasticity solution for this problem.
Figure 5: CDF of Radial Stress
The solution was perturbed at the MPP (design point) for two probability levels shown. Using the
MPP conditions from these perturbation solutions, two new incremental solutions have been obtained,
which now fall quite close to the Monte Carlo results.
The reason for the significant error in the AMVFO solution is seen in Figure 6, which plots the
probabilistic sensitivity factors taken to be the direction cosines of α in Figure 2. In the Phase I region,
the results are totally elastic and the sensitivity factors reflect little influence of the yield stress, and nearly
total (linear) dependence on the applied pressure. This is correct only for the +σ regime of the results.
For points in the regions denoted Phase II and Phase III, the results are increasingly dependent on
the effect of the yield stress and its variance. Figure 6 shows that the correct sensitivity factors for these
regions show a transition to dominance by the yield stress. The sensitivity results predicted by NESSUS at
the two iteration conditions show good agreement with the computed factors.
Figure 6: Probabilistic Sensitivity Factors for the Applied Pressure and Yield Stress (MVFO, analytical FPI, and NESSUS first-iteration results over Phases I-III)
Two conclusions are drawn from this set of results. First, if the perturbations do not involve the
physics of part of the distribution (i.e., plasticity), then the answer will have significant error. Second, the
AMVFO solution algorithm, which provides an automatic updating and accuracy checking capability, does
converge to the correct result.
The second incremental solution validation problem is for a simply supported beam subject to a
moving point load. The stiffness and density of the beam, and the constant velocity of the load are taken
to be the random variables.
The analytical solution is taken to be the Euler-Bernoulli solution, appropriate for shallow beams.
While the NESSUS code uses a Timoshenko formulation, the shear effect is minimized by modeling a
shallow beam. The NESSUS model uses double nodes to capture the shear load shift as the point load
passes the node, with β = 0.25 and γ = 0.50 (Newmark integration parameters). Two hundred time steps
were used, and the deterministic solution was stable and in excellent agreement with the analytical
solution, as shown in Figure 7.
Figure 7: Comparison of NESSUS Beam Solution (Displacement and Mixed Methods) for Travelling Load at t = 0.032 s with Exact Solution (Rotation Angle at Left-Hand Beam End)
The CDF results for the beam left-hand end angle at a fixed time point are shown in Figure 8,
again using standard normal variable plotting. Monte Carlo simulation of the exact beam equation is
shown, along with the NESSUS/FEM MVFO CDF solution. The AMVFO solution for the -2.7σ point is
also shown, and is in excellent agreement with the exact solution. The deterministic Monte Carlo solution
has been shifted by 0.0064 to agree with the NESSUS results at the mean value state.
Figure 8: CDF Results for Beam End Rotation (FEM MVFO and FEM AMVFO versus Monte Carlo)
Conclusions
The application of NESSUS to the probabilistic analysis of structural components demonstrates that
efficient and accurate prediction of the uncertainties in structural responses can be achieved for transient
problems. The AMVFO algorithm is capable of accurately predicting the full distribution so long as
the physical conditions in the CDF are contained in the perturbation. For cases when that is not true, there
is strong evidence that the iteration algorithm, which performs perturbations in the tail of the distribution,
is quite capable of correcting the error in the AMVFO solution.
Acknowledgements
The authors wish to acknowledge the contributions of their coworker, Dr. Justin Wu of Southwest
Research Institute. We also gratefully acknowledge the substantial support of Dr. C. C. Chamis, NASA
Project Engineer for the PSAM effort. This research is sponsored by NASA (LeRC) under contract NAS3-
24389.
RELIABILITY-BASED SHAPE OPTIMIZATION
USING STOCHASTIC FINITE ELEMENT METHODS
1. Introduction
Application of first-order reliability methods, FORM (see Madsen, Krenk & Lind [8]), in structural
design problems has attracted growing interest in recent years, see e.g. Frangopol [4], Murotsu,
Kishi, Okada, Yonezawa & Taguchi [9] and Sørensen [14]. In probabilistically based optimal design
of structural systems some of the quantities used in the modelling are modelled as stochastic
variables. The stochastic variables are usually related to the strength, the loads or the mathematical
model. However, a more realistic modelling of some of the uncertain quantities is generally
obtained by using stochastic fields (e.g. loads and material parameters such as Young's modulus and
the Poisson ratio). In this case stochastic finite element techniques combined with FORM analysis
can be used to obtain measures of the reliability of the structural systems, see Der Kiureghian &
Ke [6] and Liu & Der Kiureghian [7].
In this paper a reliability-based shape optimization problem is formulated with the total expected
cost as objective function and some requirements for the reliability measures (element or system
reliability measures) as constraints, see section 2. Sizing variables (diameters, thicknesses, etc.)
and shape variables (geometrical variables) are used as design variables.
The shape optimization problem is formulated with requirements for element reliability indices in
the constraints. These element reliability measures are calculated with a stochastic finite element
program where the stochastic finite elements are modelled by stochastic variables through the
midpoint method (see section 3). Hence, the spatial random nature of material measures is taken
into account in the reliability calculations.
In section 4 methods to perform sensitivity analysis in an effective manner for both the reliability
analysis and the shape optimization are presented. A computer program is developed and finally,
in section 5, two simple examples are considered, namely 1) shape optimization of a corbel with a
stochastic vertical load and 2) optimization of the geometry of a hole in a plate.
A number of structural shape optimization problems based on reliability measures can be formu-
lated, see e.g. Sørensen [13]. Here the shape optimization problem is formulated with element
reliability constraints:

    min  W(z)                                          (1)
    s.t. β_i(z) ≥ β_i^min ,  i = 1, 2, …, N            (2)
         z_j^min ≤ z_j ≤ z_j^max ,  j = 1, 2, …, M     (3)

where z_1, z_2, …, z_M are the optimization variables and β_1, β_2, …, β_N are element reliability indices.
β_i^min, i = 1, 2, …, N are the corresponding minimum acceptable element reliability indices; z_j^min
and z_j^max are simple lower and upper bounds on z_j. W(z) is the objective function, which can
e.g. be the weight or the total expected cost of the structural system. The optimization problem
(1)-(3) is generally non-linear and non-convex.
Random variables X = (X_1, X_2, …, X_n) are used to model uncertain quantities connected with
the description of the parameters in the response calculations. Failure elements are used to model
potential failure modes of the structural system. Each failure element is described by a failure
function g(x, z) = 0, where z is the vector of optimization variables. Realizations x of X where
g(x, z) ≤ 0 correspond to failure states, while realizations x where g(x, z) > 0 correspond to safe
states.
In FORM analysis, see e.g. Madsen, Krenk & Lind [8], a transformation T of the correlated
and non-normally distributed stochastic variables X into standardized and normally distributed
variables U = (U_1, U_2, …, U_n) is defined (X = T(U)). In the u space the reliability index β is then
defined as

    β = min (u^T u)^(1/2)  subject to  g(T(u), z) = 0    (4)
The reliability index β is thus determined by solving an optimization problem with one constraint.
The optimization problem is generally non-convex and non-linear and can in principle be solved
using any general non-linear optimization algorithm, but the iteration algorithm developed by
Rackwitz and Fiessler, see e.g. Madsen, Krenk & Lind [8] (based on sequential quadratic program-
ming), has proved to be effective in FORM analysis.
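The Rackwitz-Fiessler iteration just mentioned can be sketched in its basic HLRF form as follows. The limit state function here is a hypothetical linear one, chosen so that the exact answer β = 3/√2 is known; real applications supply g and its gradient through the transformation T:

```python
import math

def hlrf(g, grad_g, u0, tol=1e-10, max_iter=100):
    """HLRF iteration: the point on g(u) = 0 nearest the origin in
    standard normal space, and the reliability index beta = |u*|."""
    u = list(u0)
    for _ in range(max_iter):
        gv = g(u)
        gr = grad_g(u)
        norm2 = sum(gi * gi for gi in gr)
        # project onto the linearized limit state surface
        lam = (sum(gi * ui for gi, ui in zip(gr, u)) - gv) / norm2
        u_new = [lam * gi for gi in gr]
        if max(abs(x - y) for x, y in zip(u, u_new)) < tol:
            u = u_new
            break
        u = u_new
    return math.sqrt(sum(ui * ui for ui in u)), u

# hypothetical linear limit state g(u) = 3 - u1 - u2: exact beta = 3/sqrt(2)
g = lambda u: 3.0 - u[0] - u[1]
grad_g = lambda u: [-1.0, -1.0]
beta, u_star = hlrf(g, grad_g, [0.0, 0.0])
```

For a linear limit state the iteration converges in one step; nonlinear limit states require the repeated linearizations the loop performs.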
The optimization problem (1) - (3) can be solved using any general non-linear optimization algo-
rithm. In this paper the NLPQL algorithm developed by Schittkowski [10] is used in the examples
shown in section 5. The NLPQL algorithm is based on the optimization method by Han, Powell
and Wilson, see Gill, Murray & Wright [5]. Generally, it is a very effective method where each
iteration consists of two steps. The first step is determination of a search direction by solving a
quadratic optimization problem formed by a quadratic approximation of the Lagrange function of
the non-linear optimization problem and a linearization of the constraints at the current design
point. The second step is a line search with an augmented Lagrangian merit function.
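The search-direction step of such an SQP iteration can be illustrated on a tiny equality-constrained problem: it solves the KKT system of the quadratic subproblem (quadratic model of the Lagrangian, linearized constraints). This is a bare sketch of the direction step only, with no line search or merit function, and the test problem is hypothetical; NLPQL itself is far more elaborate:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def sqp_step(z, grad_f, hess_l, c, jac_c):
    """One SQP search-direction step: solve the KKT system of the subproblem
    min 0.5 d'Hd + grad_f'd  s.t.  c + Jc d = 0, returning z + d and the
    new multiplier estimates."""
    n, m = len(z), len(c)
    KKT = [[hess_l[i][j] for j in range(n)] + [jac_c[kk][i] for kk in range(m)]
           for i in range(n)]
    KKT += [[jac_c[kk][j] for j in range(n)] + [0.0] * m for kk in range(m)]
    rhs = [-gi for gi in grad_f] + [-ci for ci in c]
    sol = solve(KKT, rhs)
    return [zi + di for zi, di in zip(z, sol[:n])], sol[n:]

# hypothetical problem: min z1^2 + z2^2  s.t.  z1 + z2 = 2 (optimum at (1, 1));
# for a quadratic objective and linear constraint one step is exact
z = [3.0, -1.0]
grad_f = [2.0 * z[0], 2.0 * z[1]]
hess_l = [[2.0, 0.0], [0.0, 2.0]]
c = [z[0] + z[1] - 2.0]
jac_c = [[1.0, 1.0]]
z, lam = sqp_step(z, grad_f, hess_l, c, jac_c)
```

In a full implementation this direction would be followed by the line search on an augmented Lagrangian merit function described in the text.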
NLPQL requires estimates of the gradients of the objective function and the constraints. If the
structural weight is used as objective function, the gradients of W(z) are easily determined numer-
ically or analytically. Gradients of the reliability constraints, for which numerical estimation is
generally time-consuming, can be determined semi-analytically, as described in section 4.
Several approaches have been suggested to represent random fields by random variables in stochas-
tic finite elements. The most general (and most numerically stable) method is the midpoint
method, see Der Kiureghian & Ke [6]. The randomness in a stochastic element is represented by
a stochastic variable X_i assigned to the centroid of the stochastic element:

    X_i = Y(m_i)    (5)

where m_i are the coordinates of the centroid of stochastic finite element no. i.
The expected value, standard deviation and distribution of X_i are the same as those of Y(m_i).
The correlation between the variables X = (X_1, X_2, …, X_n) is defined by

    ρ[X_i, X_j] = ρ_YY(m_i, m_j)    (6)
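The midpoint discretization thus amounts to evaluating the autocorrelation coefficient function at pairs of element centroids. A minimal sketch, assuming an exponential autocorrelation ρ(d) = exp(-d/L); both the correlation model and the grid geometry here are illustrative, not taken from the paper:

```python
import math

def midpoint_correlation(centroids, L):
    """Correlation matrix of the midpoint variables of a homogeneous field,
    assuming an exponential autocorrelation rho(d) = exp(-d / L)."""
    return [[math.exp(-math.dist(ci, cj) / L) for cj in centroids]
            for ci in centroids]

# four stochastic-element centroids on a hypothetical 100 mm grid
cents = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
R = midpoint_correlation(cents, L=500.0)
```

The resulting matrix is symmetric with unit diagonal; the mesh rules below keep its off-diagonal entries away from 1, which preserves numerical stability of the transformation T.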
In general the midpoint method implies that the variability of the field within the elements is
overestimated, see Der Kiureghian & Ke [6]. This can be avoided by selecting a finer stochastic
element mesh. The stochastic mesh is generally determined according to the following two stochastic
element mesh generating rules:
1) The element mesh should be so fine that the fluctuation of the random field (measured by
the correlation length, defined as the length L_Y over which the autocorrelation coefficient function
decreases to a small value, e.g. e^(-1)) can be represented. For the midpoint method a stochastic
element size from L_Y/4 to L_Y/2 is recommended from experience in [6].
2) The elements must not be so small that highly correlated stochastic variables of adjacent elements
cause numerical instability in the transformation (X = T(U)), see section 2.
From the above rules it is suggested in [6] that a FEM-mesh is selected so that it satisfies the
requirements for an ordinary FEM-mesh and the above-mentioned requirements based on the
fluctuation of the random field. The elements in the stochastic finite element mesh are then
selected so that each element is a block of one or more FEM-elements.
Further information concerning stochastic finite element mesh generation and mesh generation in
shape optimization can be obtained from the examples in section 5.
4. Sensitivity Analysis
From section 2 it is seen that if first-order optimization methods are used, an important part of a
reliability-based shape optimization is to calculate the gradients of reliability indices and failure
functions with effective and fast methods, see also Sørensen & Enevoldsen [11]. It is possible to
use ordinary numerical differentiation, but this is generally inaccurate and inefficient because
it requires a large number of stochastic finite element response calculations. It is faster and more
reliable to use semi-analytical gradient calculations, as shown in the following.
Consider a failure element assigned to a critical point A placed in a finite element and a corre-
sponding stochastic finite element. The failure function and response formulas are written (linear
elasticity is assumed)

    g(u, z, σ[u, z, a(u, z)], a[u, z]) = 0    (7)

    K(u, z) a(u, z) = P(u, z)                 (8)

    σ(u, z) = C(u, z) B(u, z) a(u, z)         (9)

where u is a realization of the stochastic variables, g the failure function, z the optimization variables,
a the node displacements, σ the stress vector at the point A, K the stiffness matrix, P the load vector,
C the material matrix and B the strain-displacement matrix, see Bathe [1].
dg/du_j (needed in the reliability index calculations for solution of the problem in (4)) for fixed z
is then calculated as

    dg/du_j = ∂g/∂u_j + (∂g/∂σ)^T dσ/du_j + (∂g/∂a)^T da/du_j    (10)

    dσ/du_j = ∂σ/∂u_j + (∂σ/∂a) da/du_j                          (11)

    K da/du_j = ∂P/∂u_j − (∂K/∂u_j) a                            (12)

Notice that a and da/du_j in (11) are the local displacements and the derivatives of the local displace-
ments, selected from the global displacements and from the derivatives of the global displacements
da/du_j in (12). All the remaining partial derivatives in (10)-(12) can be determined analytically or
numerically.
dβ_i/dz_j (needed in the solution of the shape optimization problem in (1)-(3)) is calculated based on
a sensitivity analysis of the optimality conditions for the optimization problem in (4)
(13)
where
(14)
and
(15)
All the remaining derivatives in the above formulas are relatively easily calculated by numerical
differentiation. In this way a considerable number of stiffness matrix assemblies and inversions can be
omitted compared to a simple numerical differentiation scheme, so a large reduction in computer
time is achieved.
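The saving comes from differentiating the equilibrium equation K a = P once, K (da/du_j) = ∂P/∂u_j − (∂K/∂u_j) a, and reusing the already-factorized stiffness matrix for every random variable. A minimal two-degree-of-freedom sketch; the spring-chain system and its dependence on the random variable u are hypothetical:

```python
def solve2(K, b):
    """Direct solve of a 2x2 system (stands in for the factorized K)."""
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    return [(K[1][1] * b[0] - K[0][1] * b[1]) / det,
            (K[0][0] * b[1] - K[1][0] * b[0]) / det]

def da_du(K, dK, dP, a):
    """Displacement sensitivity from K a = P:
    K (da/du) = dP/du - (dK/du) a, reusing the already-solved K."""
    rhs = [dP[i] - sum(dK[i][j] * a[j] for j in range(2)) for i in range(2)]
    return solve2(K, rhs)

# hypothetical 2-DOF spring chain whose stiffness scales with a random
# variable u: K(u) = u * K0, loaded at the free end
K = [[2.0, -1.0], [-1.0, 1.0]]   # K(u) at u = 1
P = [0.0, 1.0]
a = solve2(K, P)                 # displacements at the mean point
dK = K                           # dK/du = K0, which equals K at u = 1
dP = [0.0, 0.0]                  # load independent of u
da = da_du(K, dK, dP, a)         # analytically da/du = -a at u = 1
```

Because a(u) = K0^(-1) P / u for this system, the analytical sensitivity at u = 1 is -a, which the sketch reproduces without re-assembling or re-factorizing K.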
5. Examples
In the following, two example problems are solved using a computer program developed for plane
problems using the methods and theories presented in the previous sections. An overview of the
program is given in figure 1, where the flow-chart of the program is shown.
Figure 1. Flow-chart of the computer program (reliability subprogram using stochastic finite element methods; the program loops over the element reliability constraints and the optimization iterations until the optimal point is found).
The corbel is shown in figure 2. The loads on the corbel consist of a distributed load P acting at a
60 degree angle.

Figure 2. Geometrical model of the corbel (load components P/4, P/2, P/4 at 60°; overall dimensions 260 × 180 mm).
The shape optimization problem is formulated as described in section 2 with the weight of the
corbel (measured by the area for fixed thickness) as the objective function.
The geometry of the top side is kept constant. The remaining geometry is optimized using 5 design
variables z assigned to 5 moving directions in the 4 master nodes shown in figure 3a. The geometry
between the governing master nodes is limited to circles or lines.
Figure 3. a) Master nodes and optimization variables with moving directions. b) FEM-mesh with assignment of constraints.
In the shape optimization problem (1)-(3), the constraints (2) ensure that the reliability is
satisfactory at the critical points (one constraint is connected with each critical point A_i, i =
1, …, N).
This is a difficult task (particularly if the structure is large or complicated) and may be carried
out by preanalysis or simply by selection of many critical points. The critical points have to be
chosen in such a way that the points that are critical during the whole optimization process are included.
In this small model only 5 critical points and corresponding constraints are chosen, see figure 3b,
where also the FEM mesh of 4-node plate elements is shown.
Reliability Analysis
The reliability indices β_i in the constraints are calculated with a failure function modelling yielding
failure by the von Mises yield criterion in a plane stress problem. The failure function is written:

    g = Z S_F − (σ_x² + σ_y² − σ_x σ_y + 3 τ_xy²)^(1/2)    (16)

where Z is a model uncertainty variable, S_F the yield stress and σ_x, σ_y and τ_xy the stress components
in u.
The Poisson ratio ν and the E-modulus are modelled as stochastic fields. Both processes are
assumed to be homogeneous and log-normally distributed. The autocorrelation coefficient functions
are calculated using

    ρ(m_i, m_j) = exp(−|m_i − m_j| / L_Y)    (17)

where m_i, m_j are the coordinates of the centroids of stochastic elements i and j, and L_Y is the
correlation length of the process.
The correlation lengths are assumed to be L_E = 500 mm and L_ν = 250 mm. Further it is assumed
that the correlation between the two fields at the same point is ρ_{ν_i,E_i} = 0.7 and at two different
points ρ_{ν_i,E_j} = √(ρ_{ν_i E_i} ρ_{ν_j E_j}) √(ρ_{E_i E_j} ρ_{ν_i ν_j}). All other variables are modelled
as uncorrelated.
From the above correlation lengths and the mesh rules in section 3.1 the stochastic finite element
mesh in figure 4 is obtained and used for both processes.
Figure 4. Stochastic finite element mesh. (Thin lines FEM, thick lines SFEM)
From figure 4 it is seen that the ordinary finite element mesh is divided into 4 stochastic elements,
i.e. the discretization of the two stochastic fields by the midpoint method gives 8 stochastic
variables.
In Table 1 the statistical characteristics of all 11 basic stochastic variables are shown.
The shape optimization problem is solved using the NLPQL algorithm, Schittkowski [10], and the
reliability calculations are performed using PRADSS, see Sørensen [12]. The initial and optimal
β-values of the constraints and the objective function value W are seen in table 2 and the optimal
geometrical shape is seen in figure 5. The computer time was approximately 1 CPU-hour on a
VAX 8700.
Table 2. Initial and optimal β-values and objective function value.

          Initial          Optimum
  β_1     4.31             4.00
  β_2     5.92             6.35
  β_3     8.75             7.09
  β_4     7.73             5.04
  β_5     4.71             4.00
  W       2.72·10⁴ mm²     2.54·10⁴ mm²
Design Evaluation
As mentioned above, the assignment of constraints to critical points is essential in shape optimiza-
tion. In this analysis the assignment is evaluated by a reliability analysis of the final optimal
solution with a failure element placed in all finite element nodes. The result of this analysis was
that none of these element reliability indices is below 4.0, i.e. the assignment of constraints in figure
3b is acceptable.
It is also important to perform a sensitivity analysis of the optimal design. A more detailed
sensitivity of the design due to changes in parameters in the structural reliability model and other
parameters in the model can be calculated according to the methods outlined in Sørensen &
Enevoldsen [11]. Here only the sensitivities of the reliability indices in the two active constraints
with respect to the basic stochastic variables are calculated.
The solution to (4) is called u* and is written u* = β α*. The elements in the α* vector are
measures of the relative importance of the stochastic variables (the α*-parameters of correlated
variables are combined in a Euclidean norm and considered as one parameter). The α*-parameters
are shown in table 3.
From table 3 it is seen that the stochastic variables of the E-modulus and the Poisson ratio are of
some significance but not dominant in this example.
Finally the β-values of the constraints are calculated using 3 × 3 stochastic elements instead of 2 ×
2, see figure 4. Not surprisingly (due to the stochastic mesh generating rules and the low sensitivity,
indicated in table 3, of the variables in the stochastic elements), this reliability analysis gives
nearly the same results as shown in table 2.
A quarter of a square plate with distributed edge loads and a central hole is considered, see figure
6.

Figure 6. Geometrical model of a plate with a central hole (measurements in mm, thickness 10 mm).
When the material properties are modelled as stochastic fields, the double symmetry of the plate
no longer strictly holds. Nevertheless symmetry is assumed because of computing time considerations.
The optimization problem is formulated as in the previous example. The geometry of the hole
is optimized with two models: model 1, shown in figure 7a, with 2 optimization variables in
2 master nodes and a geometry limited to an ellipse, and model 2, shown in figure 7b, with 7
optimization variables in 7 master nodes without limitations on the geometry.
Model 1 is formulated based on the analytical deterministic solution of the optimization problem
(weight minimization with stress constraints) being an ellipse with the same ratio between the
principal axes (a/b) as between the edge loads, i.e. a/b = 1.5 at the optimum in this case, see Braibant
& Fleury [2].
In this problem it is easy to assign the constraints because the critical stress concentrations will
be at the edge of the hole. Hence 7 element reliability constraints are assigned to the 7 hole edge
nodes in the models, with β_i^min = 4.0.
Figure 7. Optimization models: a) model 1 with 2 optimization variables and b) model 2 with 7
optimization variables.
Reliability Analysis
The reliability analysis is performed as described in example 1. The correlation lengths are assumed
to be L_E = 200 mm and L_ν = 100 mm. The load P is assumed to be normally distributed with
38·10³ N as the expected value and 0.1 as the coefficient of variation. The finite element mesh is
divided into a stochastic finite element mesh with 4 elements used for both stochastic fields, see
figure 8.
Figure 8. Stochastic finite element mesh. (Thin lines FEM, thick lines SFEM)
The results of the shape optimization are seen in table 4 and the optimal geometry of the hole in
figure 9.
Figure 9. Optimal hole geometry for model 1 and model 2.
Not surprisingly, it is seen that the largest hole is obtained using model 2 and that the optimal
hole in the reliability-based optimization using stochastic finite element methods is not exactly an
ellipse. The ratio between the axes a and b is, after all, nearly the same as in the analytical
deterministic solution.
6. Conclusions
Reliability-based shape optimization problems are formulated with requirements on element reli-
ability measures. The structures are modelled by stochastic variables and stochastic fields. The
element reliability indices are determined based on a discretization of the stochastic fields using
the stochastic finite element method with the midpoint method.
Further it is shown how sensitivity analysis in both the stochastic finite element response calcula-
tions and in the reliability calculations can be performed in a numerically efficient manner.
7. References
[1] Bathe, K.-J.: Finite Element Procedures in Engineering Analysis. Prentice-Hall, 1982.
[2] Braibant, V. & C. Fleury: An Approximation-Concepts Approach to Shape Optimal Design. Computer
Methods in Applied Mechanics and Engineering, Vol. 53, pp. 119-148, 1985.
[3] Enevoldsen, I., J. D. Sørensen & P. Thoft-Christensen: Shape Optimization of Mono-Tower Offshore
Platform. Presented at OPTI 89 Conference on "Computer Aided Optimum Design of Structures",
Southampton, 1989.
[4] Frangopol, D. M.: Sensitivity of Reliability-Based Optimum Design. ASCE, Journal of Structural
Engineering, Vol. 111, No. 8, 1985.
[5] Gill, P. E., W. Murray & M. H. Wright: Practical Optimization. Academic Press, Inc., 1981.
[6] Der Kiureghian, A. & J.-B. Ke: The Stochastic Finite Element Method in Structural Reliability.
Probabilistic Engineering Mechanics, Vol. 3, No. 2, 1988.
[7] Liu, P.-L. & A. Der Kiureghian: Finite Element Reliability Methods for Geometrically Nonlinear
Stochastic Structures. Report No. UCB/SEMM-89/05, Dept. of Civil Engineering, Berkeley, California,
1989.
[8] Madsen, H. O., S. Krenk & N. C. Lind: Methods of Structural Safety. Prentice-Hall, 1986.
[9] Murotsu, Y., M. Kishi, H. Okada, M. Yonezawa & K. Taguchi: Probabilistically Optimum Design
of Frame Structure. 11th IFIP Conf. on System Modelling and Optimization, Springer-Verlag,
pp. 545-554, 1984.
[10] Schittkowski, K.: NLPQL: A FORTRAN Subroutine Solving Constrained Non-Linear Programming
Problems. Annals of Operations Research, 1986.
[11] Sørensen, J. D. & I. Enevoldsen: Sensitivity Analysis in Reliability-Based Shape Optimization.
Presented at the NATO Advanced Study Institute on "Optimization and Decision Support
Systems in Civil Engineering", Edinburgh, 1989.
[12] Sørensen, J. D.: PRADSS: Program for Reliability Analysis and Design of Structural Systems.
Structural Reliability Theory, Paper No. 36, The University of Aalborg, Denmark, 1987.
[13] Sørensen, J. D.: Reliability-Based Optimization of Structural Systems. 13th IFIP Conference on
"System Modelling and Optimization", Tokyo, Japan, 1987.
[14] Sørensen, J. D.: Probabilistic Design of Offshore Structural Systems. Proc. 5th ASCE Spec.
Conf., pp. 189-193, Virginia, 1988.
[15] Vanmarcke, E.: Random Fields: Analysis and Synthesis. MIT Press, 1983.
CALIBRATION OF SEISMIC RELIABILITY MODELS
SUMMARY
INTRODUCTION
Writers of seismic design codes often face the problem of establishing a combination of design
response spectra, nominal loads, resistances, load factors and strength reduction factors, such that
the structures that result from their use are characterized by "adequate" safety levels, in the sense
that either failure probabilities are sufficiently small or they correspond to an optimum balance
between expected consequences and construction and maintenance costs. Significant research
efforts have been devoted to improving the criteria and tools for seismic hazard assessment;
however, very little attention has been paid to the estimation of the reliability of complex nonlinear
systems, designed in accordance with given rules, subjected to earthquakes of given intensities.
Studies addressing this (Esteva and Ruiz, 1989; Esteva et al., 1989) have produced criteria and
algorithms for computing reliabilities, which open the door to systematic studies of the problem,
but their usefulness in practice is still limited because of the sensitivity of their results to the
force-deformation properties of the structural members and to the conditions assumed to give rise
to system failure. In addition, the computational effort required by those algorithms precludes
their applicability to parametric studies dealing with detailed models of complex systems.
models with different degrees of refinement or from the predictions of one model combined with
observations on families of real structures. The final objective of the proposed approach is to
develop a set of criteria and methods suitable for use by engineers responsible for seismic safety
standards, and to provide them with the tools for a) computing reliabilities of structures belonging
to specified types or families, and b) transforming those reliabilities into values representative of
those inferred from the calibration of computed reliabilities with damage and failure rates observed
in actual structures subjected to earthquakes.
ON EQUIVALENT MODELS
Fig. 1 Structural models with different degrees of refinement
We first consider the case when a type D equivalent model is adopted. If the
linear response of the original system is replaced with the contribution of its
fundamental mode of vibration, the dynamic response of the equivalent system has to
satisfy the following equation

    ẍ + 2ζpẋ + p²x = −α ẍ₀    (1)

where x is the relative displacement of the mass with respect to the ground, ζ is
the fraction of critical damping, p and α are respectively the frequency and the
participation factor of the fundamental mode, and ẍ₀ is the ground acceleration. p
and α are related to K, M and Z (the stiffness matrix, the mass matrix and the
configuration of the fundamental mode) in accordance with well-known expressions.
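The well-known expressions referred to are the Rayleigh quotient for p and the modal participation factor for α; a minimal sketch, where the two-story shear-building matrices are hypothetical:

```python
import math

def modal_properties(K, M, Z):
    """p^2 = Z'KZ / Z'MZ (Rayleigh quotient) and alpha = Z'M·1 / Z'MZ
    (participation factor for uniform ground excitation)."""
    n = len(Z)
    k = sum(Z[i] * K[i][j] * Z[j] for i in range(n) for j in range(n))
    m = sum(Z[i] * M[i][j] * Z[j] for i in range(n) for j in range(n))
    l = sum(Z[i] * M[i][j] for i in range(n) for j in range(n))
    return math.sqrt(k / m), l / m

# hypothetical two-story shear building with unit masses; Z is the exact
# fundamental mode shape of this K (golden-ratio eigenvector)
K = [[2.0, -1.0], [-1.0, 1.0]]
M = [[1.0, 0.0], [0.0, 1.0]]
Z = [1.0, (1.0 + math.sqrt(5.0)) / 2.0]
p, alpha = modal_properties(K, M, Z)
```

Because Z is the exact eigenvector here, the Rayleigh quotient reproduces the fundamental eigenvalue (3 − √5)/2 exactly.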
When dealing with a model type B1 (fig. 1b), the equivalence conditions are:
a) the values of EI and EA (E = Young's modulus, A = cross-section area, I = moment
of inertia) of each column of the simplified system are equal to half the sum of
the corresponding values for all the columns of the original system at the same
story; b) the value of EI/L of each beam (L = span) is equal to the sum of the
values for the beams at the same level; and c) at each beam-column joint of the
simplified system is located a mass equal to half the total of the original system
at the same level.
In the cases of building frames where the most significant part of the re-
sponse arises from the fundamental vibration mode, and where the contribution of
the axial deformations of columns may be disregarded, it may be advantageous to
introduce simplifications that permit defining story stiffnesses (ratios of shear
to lateral deformation), which are independent of the along-height distribution of
lateral forces, instead of working with the full stiffness matrix (Rosenblueth and
Esteva, 1962). This simplifies the evaluation of the modal stiffness, k.
First-order estimates of the mean and variance of the fundamental natural fre-
quency (or natural period) can be obtained by the standard theory (Benjamin and
Cornell, 1970), starting from the equation T = 2π√(m/k), where m = Z^T M Z and k =
Z^T K Z. For this purpose, it is required to have the means and variances of m and
k:

    (4)

    (5)

where R and Θ are respectively the vectors of yielding moments of the plastic
hinges and of their angular deformations, and W and δ are respectively the vectors
of vertical loads acting on the beam centers and of their effective displacements.
Solving for p and obtaining its mean and variance is straightforward.
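The first-order (FOSM) propagation for T = 2π√(m/k) follows from the derivatives of T evaluated at the mean values; a brief sketch, assuming m and k uncorrelated (a simplifying assumption for the illustration, with hypothetical numbers chosen so the mean period is exactly 1 s):

```python
import math

def period_first_order(mean_m, var_m, mean_k, var_k):
    """FOSM mean and variance of T = 2*pi*sqrt(m/k) for uncorrelated m, k."""
    T0 = 2.0 * math.pi * math.sqrt(mean_m / mean_k)
    dT_dm = T0 / (2.0 * mean_m)       # dT/dm at the mean point
    dT_dk = -T0 / (2.0 * mean_k)      # dT/dk at the mean point
    var_T = dT_dm ** 2 * var_m + dT_dk ** 2 * var_k
    return T0, var_T

# hypothetical values: mean m = 1, var m = 0.01, mean k = 4*pi^2, var k = 0
T0, var_T = period_first_order(1.0, 0.01, 4.0 * math.pi ** 2, 0.0)
```

In coefficient-of-variation terms this reduces to V_T² ≈ (V_m² + V_k²)/4, which makes the relative contributions of m and k easy to compare.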
Let us consider two alternate models used to estimate the dynamic response and
performance (damage level) of a structural system subjected to seismic ground
motion. Both models differ in the degree of detail and accuracy with which the
corresponding real system is represented. The geometrical and mechanical proper-
ties are obtained in accordance with previously established rules from the proper-
ties of a system intended to represent the real one as closely as possible, within
practical conditions and the state of the art.
Let us designate by A and S the models representing the real system in detail-
ed and simplified manner, respectively. In general, it will be of interest to
obtain rules transforming the predictions of model S into those that would be fur-
nished by model A. In our formulation, seismic ground motion is defined as a
stochastic process, and the properties of systems A and S may be either uncertain
or deterministically known. The behavior of these systems will be defined by the
behavior indices Q_A and Q_S, such that if either of them exceeds its critical value
(unity) the corresponding system will fail. Because of our previous assumptions,
for a system designed and built in accordance with given specifications and sub-
jected to an earthquake of a given intensity, Q_A and Q_S will be random; if the
forms, but not the parameters, of their probability density functions are assumed, the problem to be solved is that of obtaining the transformation rules linking those parameters. Those rules may vary with the ground motion intensity or may be independent of it.
Concentrating on the most general case, let us denote by E_A and E_S the vectors of parameters that define respectively the probability density functions of Q_A and Q_S, and let us assume that they are related through the transformation matrix T, such that E_A = T E_S, where all these matrices may be expressed as functions of Y, the earthquake intensity (or vector of ground motion characteristics including, in addition, frequency content, duration, etc.) normalized with respect to the nominal seismic resistance, and H, a vector of parameters to be estimated.
In this paper it is assumed that samples of values of Q_A and Q_S are generated by Monte Carlo simulation. If these samples are sufficiently large, their probability density functions may be obtained in accordance with the methods of classical statistical theory, their parameters E_A and E_S may be estimated within narrow uncertainty margins in terms of Y and, under these circumstances, T will be readily determined. If, as will often be the case, the sample of values of Q_S is large, but that corresponding to Q_A is small, we will have to resort to small-sample estimation theory: according to Bayes' theorem,
(6)

    p(A|h) = ∏(i=1..N1) f_QA(q_i|h) · ∏(j=1..N0) P(Q_Aj > q̄_Aj | h)    (7)
Here, N1 and N0 are respectively the numbers of cases for which model A produces point values and interval information about Q_A; q_i, i = 1, ..., N1, are the calculated values of Q_A; Q_Aj is the behavior index associated with the j-th realization of model A for which failure occurs, and q̄_Aj the corresponding critical value.
    f''_H(h) = K f'_H(h) ∏(i=1..N) ∫ f_QAi(q|h) g_QAi(q|d_i) dq    (8)

where f_QAi is the probability density function of Q_A for the i-th system, given H = h. For a structure that fails, the corresponding integral in the second member of Eq. 8 equals ∫(1..∞) f_QAi(q|h) dq.
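The discrete Bayesian updating of Eqs. 6-8 can be sketched as follows. This is an illustration only: the grid of candidate parameter values h, the observed point values of Q_A, the failure thresholds, and the lognormal parameterization (median and log-standard deviation) are hypothetical placeholders, not the paper's data.

```python
import math

def lognormal_pdf(q, med, beta):
    """Lognormal density with median `med` and log-standard deviation `beta`."""
    return math.exp(-0.5 * ((math.log(q) - math.log(med)) / beta) ** 2) / (
        q * beta * math.sqrt(2.0 * math.pi))

def lognormal_sf(q, med, beta):
    """P(Q > q) for the same lognormal (survival function)."""
    z = (math.log(q) - math.log(med)) / beta
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Hypothetical calibration data: observed point values of Q_A and, for
# failed structures, the interval information Q_A > critical value (unity).
point_values = [0.6, 0.9]        # q_i, i = 1..N1
failure_thresholds = [1.0]       # q_bar_Aj, j = 1..N0

# Discrete, uniform prior on a grid of candidate values h = (median, beta):
grid = [(med, beta) for med in (0.5, 0.8, 1.1) for beta in (0.3, 0.5)]
prior = {h: 1.0 / len(grid) for h in grid}

# Posterior proportional to prior times likelihood (Eq. 7); K normalizes (Eq. 8).
posterior = {}
for h in grid:
    med, beta = h
    like = 1.0
    for q in point_values:
        like *= lognormal_pdf(q, med, beta)
    for qc in failure_thresholds:
        like *= lognormal_sf(qc, med, beta)
    posterior[h] = prior[h] * like
K = sum(posterior.values())
posterior = {h: p / K for h, p in posterior.items()}
```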
In the particular case studied in detail in this paper it is assumed that all available observations about the behavior of actual structures correspond to the same earthquake or to earthquakes of nearly the same intensity. Accordingly, we will not be concerned with the variation of T with respect to intensity. More precisely, we shall assume that T is formed by the set of unknown parameters H = (T_m, T_y) such that

(9a,b)
Example
A set of structures is considered, with the parameters given in the second and third columns of Table 1. The last column of this table displays the damage levels experienced by those structures, estimated by experts on the basis of Table 2, under the action of an earthquake belonging to the family mentioned above. The prior probability distribution of (H_1, H_2) is taken as discrete and uniform at the points corresponding to all the pairs of T_m = 0.2, 0.5, 1.0, 2.0, 5.0 and T_y = 0.5, 1.0, 2.0.
For each pair of values of T_m and T_y the integrals in the second member of Eq. 8 are evaluated numerically for all the structures studied. For this purpose, f_QAi is taken as lognormal, with the parameters given by Eqs. 9a and b. The posterior probability distribution of T_m and T_y is the matrix given in Table 3. From this information, the means of T_m and T_y are 1.01 and 0.905, and their variation coefficients 0.099 and 0.3192, respectively. Also, E(T_m² T_y²) = 1.0225.

TABLE 3. POSTERIOR DISTRIBUTION OF (T_m, T_y). VALUES OF p''_ij

  T_m \ T_y       0.5           1.0            2.0
     0.2           0             0              0
     0.5           0         0.410x10^-15   4.74x10^-31
     1.0          0.24        0.724         2.067x10^-2
     2.0           0         0.301x10^-6    1.013x10^-2
     5.0           0             0          8.562x10^-12

Consider now a new structure represented by a model of the type adopted for the calibration presented above. Let E_i(Q) and var_i(Q) be respectively the theoretical mean and variance of Q (resulting from a conventional seismic reliability analysis). Then, the marginal mean and variance of Q_i, accounting for the Bayesian uncertainty about T_m and T_y, can be obtained as follows (Parzen, 1962):
CONCLUDING REMARKS
REFERENCES
Esteva, L., Diaz, O., Mendoza, E., and Quiroz, N. (1989), "Reliability Based Design Requirements for Foundations of Buildings Subjected to Earthquakes", Proc., San Francisco
Esteva, L., and Ruiz, S.E. (1989), "Seismic Failure Rates of Multistory Frames", ASCE Journal of Structural Engineering, 115, 2, 268-284
1. INTRODUCTION
Single objective optimization has been the basic approach in most previous work on the design
of structural systems. The purpose was to seek optimal values of design variables which minimize
or maximize a specific single quantity termed objective function, while satisfying a variety of
behavioral and geometrical conditions, termed constraints. In this definition of structural opti-
mization, the quality of a structural system is evaluated using one criterion (e.g., total cost, total
weight, system reliability, system serviceability). In a recent survey, Levy and Lev [8] presented
a state-of-the-art review in the area of structural optimization. They stress developments in the
field of single objective optimization and acknowledge that in many design problems engineers
are confronted with alternatives that are conflicting in nature. For these problems, where the
quality of a design is evaluated using several competing criteria, vector optimization (also called
multiobjective, multicriterion or Pareto optimization) should be used.
Vector optimization of structural systems is an important idea that has only recently been
brought to the attention of the structural optimization community by Duckstein [1], Koski [7],
Osyczka [10], Murthy et al. [9], Frangopol and Klisinski [4], and Fu and Frangopol [6], among
others. It was shown that there are many structural design situations in which several conflicting objectives should be considered. For example, a structural system may be expected to be designed such that both its total weight and its maximum displacements are minimized. In such a situation, the designer's goal is to minimize not a single objective but several objectives simultaneously.
The main difference between single and multiobjective (vector) optimization is that the latter
almost always gives not a single solution but a set of solutions. These solutions are called Pareto-
optimum, non-inferior or non-dominated solutions. If a point belongs to a Pareto set there is no
way of improving any objective without worsening at least one other. The advantage of vector optimization is that it allows the designer to choose among different results and to find possibly the best compromise. It requires, however, considerable computational effort. In this paper, some of the computational experience with vector optimization techniques for structural systems gained recently at the University of Colorado, Boulder, is presented.
In the first part of this paper the mathematical formulation of vector optimization is reviewed
and three methods to calculate Pareto-optimum solutions are discussed. The second part describes
a vector optimization problem consisting of minimizing both volume and displacements of a given
structural system having elastic or elastic-perfectly plastic behavior. The third part of the work
contains illustrations of vector optimization of a 3-D truss system under single and multiple
loading conditions, followed by a discussion on the computational experience and the conclusions.
The design variable vector x belongs to the feasible set Ω defined by equality and inequality constraints as follows

    Ω = {x ∈ Rⁿ : h(x) = 0, g(x) ≤ 0}    (4)

The image of the feasible set Ω in the objective function space is denoted by f(Ω).
If the components of the objective vector f(x) are not fully independent, there is not usually a unique point at which all the objective functions reach their minima simultaneously. As previously mentioned, the solution of a vector optimization problem is called a Pareto-optimum or non-dominated solution. Two types of non-dominated solutions can be defined as follows [1]:
(a) a point x⁰ ∈ Ω is a weakly non-dominated solution if and only if there is no x ∈ Ω such that f_i(x) < f_i(x⁰) for all i = 1, ..., m;
(b) a point x⁰ ∈ Ω is a strongly non-dominated solution if and only if there is no x ∈ Ω such that f_i(x) ≤ f_i(x⁰) for all i, with strict inequality for at least one i.
The main task of vector optimization is to find the set of strongly non-dominated solutions
(also called Pareto solutions or minimal curve) in the objective function space and the corre-
sponding values of the design variables (Pareto optimal curve) in the feasible region space.
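The dominance test behind these definitions is easy to state in code. A minimal sketch for a finite list of objective vectors (minimization assumed; the sample points are illustrative):

```python
def dominates(fa, fb):
    """True if objective vector fa dominates fb (minimization): fa is no
    worse in every component and strictly better in at least one."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def pareto_set(points):
    """Strongly non-dominated subset of a finite list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
# (3.0, 3.0) is dominated by (2.0, 2.0); the other three are non-dominated.
```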
There are a number of vector optimization solution techniques described in the literature (see Duckstein [1], Koski [7], Osyczka [10], among others). Not all of them, however, are suitable for structural optimization. Three solution methods were investigated at the University of Colorado, Boulder: the weighting method, the minimax method and the constraint method. These methods are widely discussed in [1], [7] and [10], among others.
The basic idea of the weighting method is to define the objective function F as a scalar product of the weight vector w and the objective function vector f

    F = w · f    (8)

Without loss of generality the vector w can be normalized. The Pareto optimal set can theoretically be obtained by varying the weight vector w. In the objective function space the single objective function F is linear. For a fixed vector w, the optimization process results in a point at which a hyperplane representing the single objective function is tangential to the image of the feasible set f(Ω). Only if the set f(Ω) is convex is the weighting method able to generate all the strongly non-dominated solutions. The second drawback of this technique is the difficulty involved in choosing proper weight factors. Since the shape of the image of the feasible set
f(Ω) is generally unknown, it is almost impossible to predict where the solution will be located. Sometimes the problem is also very sensitive to the variation of the weights. In such a case the weighting approach can prove unsatisfactory.
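The weighting method can be sketched on a deliberately simple one-variable trade-off. The two objective functions below are stand-ins (not the truss objectives of the example section): a grid search over the feasible set shows how sweeping the normalized weight vector traces Pareto points.

```python
def f1(x):
    return x          # stand-in for volume: grows with member size

def f2(x):
    return 1.0 / x    # stand-in for displacement: shrinks with member size

# Feasible set: a grid over 0.1 <= x <= 3.0
xs = [0.1 + 0.01 * i for i in range(291)]

def weighting_method(w1, w2):
    """Minimize the scalarized objective F = w1*f1 + w2*f2 over the grid."""
    return min(xs, key=lambda x: w1 * f1(x) + w2 * f2(x))

# Sweeping the normalized weight vector traces points of the Pareto set.
pareto_points = [(f1(x), f2(x)) for x in
                 (weighting_method(w, 1.0 - w) for w in (0.2, 0.5, 0.8))]
```

For this convex toy problem the scalarized minimum sits at x = sqrt(w2/w1), so each weight pair lands on a different Pareto point; on a non-convex image this sweep would miss parts of the front, which is the drawback noted above.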
Another method for solving vector optimization problems is the minimax method described in [7]. This method introduces distance norms into the objective function space. In this method the reference point from which distances are measured is the so-called ideal solution. This solution can be described by the vector

    f⁰ = (f1⁰, f2⁰, ..., fm⁰)    (9)

where all its components are obtained as the solutions of m single objective optimization problems. Generally, the ideal solution is not feasible, so it does not belong to the set f(Ω).
where the vectors w and f were previously defined. For a given vector w, the norm (11) corresponds to a search along some line starting from the ideal solution. If other norms are used, this approach generates the entire family of non-dominated solutions.
The minimax approach eliminates one drawback of the weighting method, because it is also suitable for non-convex problems. It may, however, also be sensitive to the values of the weight factors. The ability to predict where the solution will be located is improved. To use this method it is necessary to know the ideal solution first, which calls for solving m scalar optimization problems: this is the price to pay for using this method.
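A weighted-Chebyshev (minimax) search can be sketched on the same kind of one-variable toy problem (the objective functions are illustrative stand-ins, not the paper's). The ideal solution is computed first from the two scalar minimizations, as the text requires.

```python
def f1(x):
    return x

def f2(x):
    return 1.0 / x

xs = [0.1 + 0.01 * i for i in range(291)]   # feasible grid, 0.1 <= x <= 3.0

# Ideal solution: each component from its own single-objective minimization.
ideal = (min(f1(x) for x in xs), min(f2(x) for x in xs))

def minimax_method(w1, w2):
    """Minimize the weighted Chebyshev distance from the ideal solution."""
    return min(xs, key=lambda x: max(w1 * (f1(x) - ideal[0]),
                                     w2 * (f2(x) - ideal[1])))

x_star = minimax_method(1.0, 1.0)
```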
Another alternative to the previous methods is the ε-constraint method. The main idea of this method is to convert m−1 objective functions into constraints. This can be achieved by assuming that values below some given levels of all these functions are satisfactory. Without loss of generality it may be assumed that the components f2, f3, ..., fm of the objective function vector will be constrained and only f1 will be minimized.
Let us define the concept of this method in mathematical terms. The original problem (1-4) is replaced by

    min f1(x),  x ∈ Ω_ε    (12)

where

    Ω_ε = {x ∈ Ω : f_i(x) ≤ ε_i, i = 2, ..., m}    (13)

and

(14)
The entire Pareto set can be obtained by varying the ε vector. The constraint method applies to non-convex problems and does not require any additional computations. It may be treated as the basic numerical technique in vector optimization.
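The ε-constraint loop can be sketched on a one-variable stand-in problem (the volume and displacement functions below are illustrative, not the truss model): the first objective is always minimized while the constraint level ε on the second one varies.

```python
def volume(x):
    return x          # objective kept for minimization

def displacement(x):
    return 1.0 / x    # objective converted into a constraint

xs = [0.1 + 0.01 * i for i in range(291)]   # feasible grid, 0.1 <= x <= 3.0

def eps_constraint(eps):
    """Minimize volume subject to displacement(x) <= eps (cf. Eqs. 12-13);
    returns None when the constraint level is infeasible on the grid."""
    feasible = [x for x in xs if displacement(x) <= eps]
    return min(feasible, key=volume) if feasible else None

# Varying the constraint level eps traces the Pareto set.
pareto = [(volume(x), displacement(x))
          for x in (eps_constraint(e) for e in (0.5, 1.0, 2.0))
          if x is not None]
```

Note that nothing in the loop depends on convexity of the objective image, which is why the text singles this method out as the most dependable.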
There are also many other multiobjective optimization methods (see, e.g., [1]), but they are less suitable for our purposes.
The mathematical methods described above have been applied to minimize both the volume and the displacements of a given structure with an assumed material behavior. The objective function vector therefore has two major components

    f = [V, dᵀ]ᵀ    (15)

where V denotes the volume of the structure and d a displacement vector. Usually the vector d contains the maximum displacements of all nodes in two perpendicular directions

(16)

(17)

The optimization variable vector x includes the cross-sectional areas of groups of elements, a_i. The allowable areas are limited by minimum and maximum values

    a⁻ ≤ a_i ≤ a⁺    (18)

The material has to satisfy a constitutive relation and, in the case of elastic behavior, stress constraints

    q⁻ ≤ q ≤ q⁺    (19)

where q⁻ and q⁺ are the allowable compression and tension stresses, respectively.
Two types of material behavior have been considered: elastic and elastic-perfectly plastic. The nonlinear material behavior imposes more difficulties because of the larger computational effort. If the material behavior is perfectly plastic, displacements may not be unique and, therefore, they are computed not for the ultimate load but for the design load. The ultimate load is higher than the design load because of the reserve strength of the structure. The ratio between these two loads depends on the assumed value of the reserve strength factor. For reasonably constructed structures the design displacements are elastic and, therefore, unique. Structures can be optimized with respect to one or multiple load conditions; in the latter case the optimal structure has to satisfy all stress and displacement constraints for all load conditions.
All three optimization methods described in the previous section have been used to obtain Pareto sets. In the next section, the results of the weighting method are illustrated with an example considering a twenty-five-bar space truss structure.
4. EXAMPLE
Three example problems (i.e., a four-bar, a ten-bar, and a twenty-five-bar truss) have been solved by the authors to show how vector optimization can be applied to truss structures. The material behavior was considered to be elastic in the four-bar and twenty-five-bar truss examples. In the ten-bar truss example the material is elastic-perfectly plastic. Two of the examples (i.e., the four-bar and twenty-five-bar trusses) have been optimized for two load cases and one (i.e., the ten-bar truss) for a single load case. Owing to space limitations, this section is restricted to the twenty-five-bar space truss example.
Consider the twenty-five-bar steel truss subjected to a single lateral load Q = 62.5 kips as shown in Fig. 1(a). This structure has been optimized for minimum volume in [3]. Using the same assumptions as in [3], except that now the behavior is assumed to be elastic (i.e., −36 ksi ≤ q ≤ 36 ksi), E = 10000 ksi and 0.1 in² ≤ a_i ≤ 3.0 in², the vector optimization has been performed by the weighting method. The truss members have been linked into eight groups of different cross-sectional areas as follows: A1 = a1; A2 = a2 = a3 = a4 = a5; A3 = a6 = a7 = a8 = a9; A4 = a10 = a11; A5 = a12 = a13; A6 = a14 = a15 = a16 = a17; A7 = a18 = a19 = a20 = a21; and A8 = a22 = a23 = a24 = a25, where a_i = cross-sectional area of member i. The volume of the truss together with the maximum horizontal displacement in the same direction as the applied load, δx = D, have been considered as objectives

(20)
The results of the vector optimization are shown in Table 1, where the weight factors w1 and w2 are applied to the volume V and the horizontal displacement δx = D, respectively. Fig. 2 shows the image of the Pareto set in the objective function space.
Next, to extend the objective function space, one more load case (i.e., load P = 40.4 kips) has been considered as shown in Fig. 1(b). In both load cases the maximum displacement in the
[Fig. 1: Twenty-five-bar space truss (100 in. dimensions); (a) load case 1; (b) load case 2]
[Fig. 2: Image of the Pareto set in the objective function space; volume vs. horizontal displacement (in)]
[Fig. 3: Non-dominated displacement solutions; displ. x vs. displ. y (in)]
[Fig. 4: Minimum volume vs. average maximum displacement (δx + δy)/2]
[Fig. 5: Load-carrying capacity interaction for the initial structure (vol. = 3301.2) and the optimized structure (vol. = 2088.8); force Q (kips)]
same direction as the applied load constitutes an objective. These two displacements together with the volume of the structure V are included in the objective vector

(21)

where δx and δy are the maximum horizontal displacements associated with load cases 1 and 2, respectively. Table 2 shows the results of vector optimization for the two load cases for different weight vector coefficients. As can be seen, the combinations of these coefficients vary within a wide range, but all the non-dominated displacement solutions are very close to the diagonal line representing the average value of the maximum displacements (Fig. 3). For this reason, the average value of maximum displacements versus minimum volume has been plotted (Fig. 4).
In this example both load cases shown in Figs. 1(a) and (b) can be considered as two extremes. The load is applied first from the direction of the x-axis (Fig. 1(a)) and then from the direction of the y-axis (Fig. 1(b)). The load-carrying capacity interactions for the initial structure (A1 = A2 = ... = A8 = 1 in²) and for the optimized minimum-volume truss (i.e., neglecting the displacement vector objective; see line 1 in Table 2) are plotted in Fig. 5. This figure shows what can be expected when the twenty-five-bar space truss has to be designed for a variety of loading situations: optimization reduces the load-carrying capacity for intermediate loading situations.
5. COMPUTATIONAL EXPERIENCE
Due to space limitations, this section contains only a brief summary of the computational experience gained by the authors in solving vector optimization problems using the weighting and the ε-constraint methods.
The weighting method showed all its drawbacks, and the ε-constraint method appeared to be the most suitable for our purposes. If the weighting method is used, it is necessary to normalize all the objective functions because of their different units. At the beginning, all criteria are
minimized separately and then the single objective function is defined as follows

    F = w · f̄    (22)

where the vector f̄ has been normalized with respect to the minimum values of all objectives (the ideal solution)

    f̄_i = f_i / f_i⁰    (23)

The weight vector has also been normalized in such a way that Σ w_i = 1. After normalization the best possible value of the objective function is equal to one, and it can be obtained only if a single criterion is optimized. In all other cases the value of this function shows the distance from the ideal solution.
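The normalization just described amounts to two one-liners; the numerical values below are made up for illustration only.

```python
def normalize_objectives(f, f_ideal):
    """Divide each objective by its single-objective minimum (cf. Eq. 23)."""
    return [fi / fi0 for fi, fi0 in zip(f, f_ideal)]

def normalize_weights(w):
    """Scale the weight vector so its components sum to one."""
    s = sum(w)
    return [wi / s for wi in w]

# Hypothetical objective values and their ideal-solution minima:
f_bar = normalize_objectives([6000.0, 2.0], [2000.0, 0.5])
w = normalize_weights([2.0, 1.0, 1.0])
```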
In the ε-constraint method, the objective function vector contains one scalar component (i.e., the volume V) and the displacement vector d. It is, therefore, logical to convert the displacement part of the objective function vector into constraints. The volume of the structure is always minimized, while the displacement constraints change. In this way the entire Pareto set can be obtained without regard to the nonconvexity of the problem. Importantly, this approach has never failed and is, in our opinion, the simplest, most dependable and probably the best possible for this type of problem. The best point to start from is the ideal solution f⁰ (see Eq. 9). It provides lower and upper bounds for the displacements. Based on these bounds, the grid of displacement constraints can be constructed using, for example, equal subdivisions. This grid then defines the ε vector for each single objective task.
The minimax approach could not give better results than the ε-constraint method. It is, however, better than the weighting method and should be considered as a substitute for it whenever possible.
6. CONCLUSIONS
Vector optimization is a very useful approach in structural engineering when the design of a structure has to satisfy conflicting objectives, such as minimum volume and minimum displacements. The computational experience gained at the University of Colorado, Boulder, has indicated that the ε-constraint method is the most appropriate technique for this class of problems. The weighting method is perhaps easier to understand, but much harder to control, and cannot be applied to non-convex problems. The computer cost associated with solving realistic multiobjective structural optimization problems can be high. It will, therefore, be necessary to improve algorithms, limit as much as possible the decision space, and consider simplifications of material models. Vector optimization could also be of use in reliability-based design [6] as well as for structural design code-writers [5].
7. ACKNOWLEDGEMENTS
This work was supported by the National Science Foundation under Grant No. MSM-8618108.
REFERENCES
1. Duckstein, L., "Multiobjective Optimization in Structural Design: The Model Choice Problem," New Directions in Optimum Structural Design, Eds. E. Atrek et al., John Wiley, 1984.
2. Frangopol, D.M. and Klisinski, M., "Material Behavior and Optimum Design of Structural Systems," Journal of Structural Engineering, ASCE, Vol. 115, No. 7, 1989.
4. Frangopol, D.M. and Klisinski, M., "Vector Optimization of Structural Systems," Computer
Utilization in Structural Engineering, Ed. J.K. Nelson Jr., ASCE, 1989.
5. Frangopol, D.M. and Corotis, R.B. (Editors), "System Reliability in Structural Analysis,
Design and Optimization," Structural Safety, Special Issue, Vol. 7, Nos. 2-4, 1990.
9. Murthy, N.S., Gero, J.S. and Radford, D.A., "Multifunctional Material System Design,"
Journal of Structural Engineering, ASCE, Vol. 112, No. 11, 1986.
11. Rosenbrock, H.H., "An Automatic Method for Finding the Greatest or Least Value of a
Function," The Computer Journal, No. 3, 1960.
MANAGEMENT OF STRUCTURAL SYSTEM RELIABILITY
1. INTRODUCTION
Great progress has been made in the theory of structural reliability over the past two decades. One milestone is marked by the important developments in the theory of structural system reliability, due to the well-known interest of engineers and researchers in system behavior instead of only component failure events.
Over the lifetime of a structural system the reliability level of the intact state should not be the only focus, although it has received a great deal of attention from researchers. In its expected lifetime a structural system may experience various types of deterioration or damage due to corrosion, fatigue and/or fracture, and accidental loss of structural material and capacity, to name a few events. The deterioration and damage may result in a substantial decrease of the structural reliability level as compared to the intact state. This is observed in many types of structural systems such as aircraft, highway bridges, offshore platforms, etc.
The theory of structural system reliability should address issues such as effective redundancy implementation, damage tolerability, optimal inspection schedules, etc. These issues are related to the intermediate states of structural system performance between the intact and ultimate failure states, which are sketched in Fig. 1. These states deserve more investigation in the development of structural system reliability theory.
This paper discusses the above failure states in the context of structural system reliability. This activity is herein referred to as the management of structural system reliability. The probability of system failure may be written as

    Pf = ∫ G(x) f(x) dx    (1)

where x is the random variable vector, which usually contains load effects and component resistances; f(x) is the joint probability density function of the vector x; and G(x) is the failure indicator function, equal to unity when the system fails and zero when the system survives. The integral in Eq. (1) is often difficult to obtain analytically, as the integration hyperspace defined by G(x) = 1 is usually extremely oddly shaped. Fortunately, two candidate approximations are available to provide practically accurate solutions to Eq. (1), namely bounding techniques and Monte Carlo simulation [Ditlevsen 1979, Fu and Moses 1988]. A conventional system reliability index β is also used, which is converted from Pf through the cumulative distribution function of the standard normal variable, Φ(·), as follows:

    Φ(−β) = Pf    (2)
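A crude Monte Carlo estimate of Eq. (1) and the conversion of Eq. (2) can be sketched as follows. The normal load and resistance variables and the failure condition S > R are hypothetical placeholders, not the bridge model of the application example.

```python
import random
from statistics import NormalDist

random.seed(0)

def simulate_pf(n=200_000):
    """Crude Monte Carlo estimate of Pf = P[G(x) = 1] for a toy system:
    load effect S ~ N(10, 2), resistance R ~ N(20, 3); failure when S > R."""
    failures = 0
    for _ in range(n):
        s = random.gauss(10.0, 2.0)
        r = random.gauss(20.0, 3.0)
        if s > r:
            failures += 1
    return failures / n

pf = simulate_pf()
beta = -NormalDist().inv_cdf(pf)   # Eq. (2): Phi(-beta) = Pf
```

For this toy case the exact answer is known (the margin R − S is normal with mean 10 and standard deviation √13, so β ≈ 2.77), which makes it a convenient check on the sampling error.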
where β|C is the system reliability index given that component C is completely removed (no longer serviceable), and β in Eq. 3 is the system reliability index of the intact system. CRF_C, defined in Eq. (3), indicates the importance of the specific component to the system in terms of system failure probability: a higher value of CRF_C represents a more critical component. Based on the same concept, a damage tolerability factor DTF_Cd is also proposed here:

where β|Cd is the system reliability index given that component C has experienced some amount, but not total, damage. This factor equals unity when the damage state has no impact on the system at all, i.e., is perfectly tolerable; lower values occur when damage states are less tolerable to the system.
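Eq. (3) is not reproduced in this excerpt, but the surrounding text pins down the behavior of both factors: CRF grows as the component becomes more critical and exceeds unity exactly when the residual index β|C is negative, while DTF equals unity for perfectly tolerable damage. A sketch using definitions consistent with those stated properties (an assumption for illustration, not the paper's equations); the numerical indices are hypothetical except the intact β = 5.59 quoted later in the text:

```python
def crf(beta_intact, beta_removed):
    """Component redundancy factor: relative drop in the system reliability
    index when the component is removed; exceeds 1 iff beta_removed < 0."""
    return (beta_intact - beta_removed) / beta_intact

def dtf(beta_intact, beta_damaged):
    """Damage tolerability factor: equals 1 when the damage state has no
    impact on the system; lower values mean less tolerable damage."""
    return beta_damaged / beta_intact

# Hypothetical indices for a four-girder system (intact beta = 5.59):
crf_external = crf(5.59, 2.10)   # external girder completely removed
dtf_15 = dtf(5.59, 5.42)         # 15% strength loss: roughly 3% drop in beta
```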
3. AN APPLICATION EXAMPLE
where the coefficients C_i (i = 1, ..., 4) depend on the girder positions described by the transverse overhang length L' (see Fig. 2b):

                                     C1      C2      C3      C4
  Girders not moved in, L' = 0      2.250   3.000   0.750   3.000
  Girders moved in, L' = 0.033L     2.304   3.210   0.696   3.210
  Girders moved in, L' = 0.067L     2.370   3.460   0.630   3.460
  Girders moved in, L' = 0.1L       2.440   3.750   0.560   3.750
These two modes represent the significant symmetric failure mechanisms. The other two possible failure modes, with failure of either R1, R3 and R4 or R1, R2 and R4, are not comparably significant, and therefore they are not included in the system failure probability assessment.
Case I: No overhang for outer girders and equal girder sizes designed according
to the load effect of the internal girder. [This is conventional
practice.] (Fig. 2a)
Case II: Outer girders moved in and equal girder sizes with the same strength as
Case I. [This compares the influence of girder geometry but with no
change in girder costs from Case I.] (Fig. 2b)
Case III: Outer girders moved in and equal girder sizes with the strength designed according to the load effect. [This compares the influence of using spacing geometry to optimize weight.] (Fig. 2b)
Case IV: Outer girders moved in and unequal girders designed according to
respective load effect. [This compares the influence of optimizing
geometry, spacing, and member sizes.] (Fig. 2c)
are plotted in Fig. 4. It is observed that the conventional design with L' equal to zero provides slightly higher reliability. For example, in Fig. 4 the conventional LFD design gives a system β equal to 5.59 for the 120 ft span, and the corresponding alternative design with L' = 0.1L yields a value of 5.47. It should be noted that the resistances (R1 to R4) of the alternative design with L' = 0.1L are taken equal to those of the conventional design (L' = 0) in order to have an unbiased comparison. This is done only for illustration, since this alternative design requires smaller member capacities for the girders because of their lower load effect. The saving in member sizes attracts designers seeking to reduce construction cost. This occurs especially when contractors are permitted "value engineering" options to change bid designs and codes do not contain system constraints. It is obvious that the system reliability level would be decreased even further if the girder capacities were designed for the (lower) load effect. This is also shown in Fig. 4. The resulting decrease of the system reliability level is due to the reduction of reserve strength when the outer girders are moved inwards.
Figs. 5 and 6, respectively for LFD and WSD, give more insight into the influence of girder position on system reliability, where the external girders are moved by L', a fraction of the bridge width L.
Tables 3a and 3b below display the damage tolerability factors of the external and internal girders of the conventional design (Fig. 2a), respectively for damage levels of 15% and 30% loss of component strength. Such losses have occurred in bridges either due to material damage such as corrosion or, frequently, due to collisions with girders of overpass structures.
These tables show relatively uniform changes over span lengths in the system reliability index β due to damage. Table 3 may be used when a practical quantification of the β decrease due to damage is needed. For the data used herein, for example, the system β is reduced by around 3% due to 15% damage on any one of the girders (Table 3a). For higher damage (Table 3b), a 30% strength reduction of an external girder leads to around a 12% decrease of the system β, whereas that of an internal girder results in only about a 7% decrease.
Fig. 7 displays the CRFs (component redundancy factors) for both the internal (R2 and R3) and external (R1 and R4) girders of the conventional design. These curves reflect the impact of the total loss of the girders.
Higher CRFs indicate greater importance. The external girders appear more important to the system, as they contribute more to stability due to their geometric positions and, in addition, they contain most of the reserve strength capacity of the system. The loss of an external girder is more likely to be caused by a collision of an oversized vehicle, while fatigue failure is the major concern for the loss of the internal girders.
(5)

above by Eq. (2). A similar application can be made for the total failure probability due to partial damage of components (not complete loss) by using Eq. (5) with all conditional probability definitions also addressing damage such as corrosion.
The importance of the internal and external girders also changes in the new design. The internal girders become more important in the unequal girder design, as shown by their higher CRFs in Fig. 9. Some of the values in Fig. 9 are above 1, which indicates that the residual reliability index β|C for the internal girder is below zero. Thus, the structure is likely to collapse in the event of a single member failure.
4. CONCLUSIONS
5. ACKNOWLEDGEMENTS
Support for this work from the National Science Foundation (Grant ECE85-16771) and the Ohio Department of Transportation (A Reliability Analysis of Permit Loads on Bridges) is gratefully acknowledged.
6. REFERENCES
[1] AASHTO, Standard Specifications for Highway Bridges, 13th Edition, Washington, DC, USA, 1983
[3] Fu, G. & Moses, F., "Probabilistic Concepts of Redundancy and Tolerability for Structural Systems" (to appear), Proc. ICOSSAR '89, San Francisco, CA, USA, Aug. 1989
[4] Fu, G. & Moses, F., "Importance Sampling in Structural System Reliability", Proc. 5th ASCE Specialty Conference, Ed. P.D. Spanos, Blacksburg, VA, USA, May 25-27, 1988, p. 340
[5] Fu, G. & Moses, F., "Lifetime System Reliability Models with Application to Highway Bridges", Proc. ICASP5, Ed. N.C. Lind, Vancouver, B.C., Canada, May 25-29, 1987, p. 71
[7] Moses, F. and Ghosn, M., "A Comprehensive Study of Bridge Loads and Reliability", Report No. FHWA/OH-85/005, Department of Civil Engineering, Case Western Reserve University, Cleveland, OH, Jan. 1985
[8] Working Group on Concepts and Techniques: Position Paper, New Directions in Structural System Reliability, Ed. D.M. Frangopol, Proc. of a Workshop on Research Needs for Applications of System Reliability Concepts and Techniques in Structural Analysis, Design and Optimization, Boulder, CO, USA, Sept. 12-14, 1988, p. 363
[Fig. 1: Intermediate States in Structural Reliability Theory — system states between the intact state and failure under an environmental hazard, addressed by structural system reliability vs. structural component reliability]
124
r+h~
I
~~ I
D
II
I R
I
R
IR
L
L
4
~ ~~~
a) Conventional Design - Case I
L'
-r- I I I
L
I I I
o
OJ
(f)
\ , I
\ .. "
I
,-:~
I
o
4- +-'
C 'Q\ ~ ..\ ,
ill ~ \ \-... '.~ ~ \ ,~ o
C ~\ \.t ...,. - ~ I &! to
>,0 ..,. \ \~"'';, ~\ iI"."... ,--
..,cz"
U1.'
roU
\
\ ..
\
\ :1 ,-- .!:
+-'
01
c
\ '. \ I ~
(])u
0: §
\ '.
\ '. \. ! o
0'1
c
ro
\\
'\'
\\, \
Q.
(f)
.'
E \\
\ 1
,I
'\
ill '\ ~ o
+-' \ to
~I·
(f)
~
." ~
\\ I'
\
~
u
(J) \\
,.
\
1 '
I '
o
to C\JC'1
Optional designs
7.00 .----------------------------------------------------------,
L' = 10:1 L
6.50 \ t.,.sul.-----~ ..
---"
I\t\~---- ...... ··\;,.sU)
g>~~-- .... "W;"ed~\1\
In
+-'
6.00
_-- ----
-- _-- --~., .. ' (;\t'det'
convent.~na ----=-:-J
,~!!:!l~tUll~!.) ___ -
(l)
_---- ~~------. _------~v;d~i; tUll)
CD
E
(l)
5.50
--- _---- _---~~-------- Clrder
+-'
(f) 5.00
:>.
(j)
Case II
!\J
+-'
(]) 5.00
aJ
E
(])
+-'
(f)
» 4.50
(j)
4.00
3.50
30. 60. 90. '120. 150. 180.
soan lenqth (ft)
FIQ.6 System Beta vs. Girder
Positions (\lVSO)
7.00 , - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ,
Case II
6.50
6.00
!\J
+-'
(])
aJ 5.50
E
(])
+-'
(f) 5.00
»
(j)
4.50
4.00
--,
3.50
30. 60. 90. 120. 150. 180.
span length (ft)
127
0.70
C\I
[[ 0.60 -----cRf;,(IISOl--- ---------
D
c
iU
0.50
[[
cRF (LfD)
4- 0040 -~-----
0
LL .. CRF~· (lisD)·
[[ 0.30
U
0.20
0.10
30 60 90 120 150 180
span length <tt)
7
1 girderS (II5D)___ - - - - - - - - -
~~~---------
6 ----
iU
Case I
--- Equal girders (LfD)
+-' /-
(])
m 5 ------'-'"--
Unequal girders ..l~??
....................
...... .
E
(])
+-'
4
:- ~ -: . ~.~ ;::e_ ~v"~~ .:.~.~ .~~~~.~ .~~.~ __________________ -Unequa'- gl;::d-;';::s- elF-D). __
~ 3
(()
1.10
C\J 1.00
[[
CRFR2 (II5D)
u 0.90
c
..
(Ij
0.80
[[ 0.70
4-
0 0.60
LL 0.50
[[
U ----------
- -- -- - -- --CRfR~ (IIsD)
0.40
0.30 -
0.20
0.10
30 60 90 120 150 180
span length (ft)
6.50
QJ
[[ 5.50
E
QJ
......
~ 5.00
(j)
4.50
1 (50 years) 2 (25 years)5 (10 years) 10(5 years) 25(2 years)
SUMMARY
This paper provides a reliability analysis of existing bridge structures under earthquake loads. In the
reliability analysis, the ultimate limit state is defined in terms of displacement instead of load effect or
stress. It is assumed that failure occurs when the maximum response displacement becomes larger
than the prescribed allowable displacement. The maximum displacement is calculated from the non-
stationary power spectrum of earthquake motion. This response analysis can include the effects of
inelastic material behavior, natural frequency of structures, ground condition and seismic zoning. A numerical
example is presented to demonstrate the applicability of the method proposed here.
INTRODUCTION
In Japan, consideration of earthquakes is quite important to ensure the reliability of existing
structures during their lifetime. In the reliability analysis of existing structures, we can utilize
more exact information than is available at the design stage. Namely, we can obtain reliable information
with respect to the ground condition, the change of traffic loads, the occurrence of earthquakes,
and the deterioration of structural resistance, using various data from observation, field tests and
laboratory tests.
In this paper, an attempt is made to develop a method of evaluating the reliability of existing
bridge structures under earthquake loads. In general, pier structures are likely to suffer damage
due to earthquakes. Hence, attention is paid to the reliability analysis of bridge piers, especially
reinforced concrete (RC) piers. In the reliability analysis, only the ultimate limit state is con-
sidered among the possible limit states. The limit state is specified in terms of displacement instead of
load effect or stress. It is assumed that failure occurs when the maximum response displacement
becomes larger than the prescribed allowable displacement. This formulation is intended to take
into account, without difficulty, the effect of the plastic behavior of RC piers after yielding of the
reinforcing steel in the reliability assessment of earthquake-resistant structures. The maximum
displacement is calculated from the non-stationary power spectrum of earthquake motion 1). This
response analysis can include the effects of the natural frequency of structures, ground condition and
seismic zoning, in addition to the effect of inelastic material behavior. A numerical example is
presented to demonstrate the applicability of the method proposed here.
In this paper, the response of structures is related to the intensity of the non-stationary power
spectrum of the input motion. As a representative parameter of the structural response, the maximum dis-
placement of the top of the piers is employed instead of the maximum force or stress, because failure of
the pier structure can easily be expressed in terms of displacement if the yield displacement and
ductility factor are known.
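As a minimal sketch of this displacement-based failure criterion (the function names are assumptions; the ductility factor 4 and yield displacement 0.618 cm are taken from the numerical example later in the paper), the allowable displacement can be expressed as the ductility factor times the yield displacement:

```python
def allowable_displacement(delta_y, mu):
    # allowable displacement = ductility factor * yield displacement
    return mu * delta_y

def fails(y_max, delta_y, mu):
    # displacement-based ultimate limit state: failure occurs when the
    # maximum response displacement exceeds the allowable displacement
    return y_max > allowable_displacement(delta_y, mu)
```

With mu = 4 and delta_y = 0.618 cm the allowable displacement is 2.472 cm, so a maximum response of 3.0 cm counts as failure.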
In past studies 2), the earthquake load is estimated on the basis of an approximate relation
between the maximum ground acceleration and the maximum response of structures. However,
the structural response is greatly influenced by the natural frequency of the structure.
Moreover, the duration of earthquake motion affects the failure path, in that RC structures reach
failure through a successive process of alternating deformations during the earthquake motion.
Considering these facts, it is desirable to take into consideration the non-stationary characteristics
of earthquake motion. Especially when estimating the maximum response deformation, it is
preferable to use the non-stationary power spectrum instead of the maximum acceleration of the input
motion, because the former can account for the spectral characteristics of the input ground motion.
An attempt is made here to estimate the structural response, covering both the elastic
and inelastic regions, by paying attention to the intensity parameter of the non-stationary power
spectrum. Considering the non-stationary characteristics 3),4), the acceleration of earthquake motion is written as

ẍ(t) = Σ_k √(2 G_X(t, ω_k) Δω) cos(ω_k t + φ_k)    (1)

where G_X(t, ω_k) is the non-stationary power spectrum for circular frequency ω and time t, Δω is the
discretization step of ω, and φ_k is the phase at t = 0. The square root of G_X(t, ω_k) is given as
√G_X(t, ω) = { ... (0 ≤ t ≤ t_s); ... (t_s < t) }    (2)
where α_m(f) is the maximum value of √G_X(t, ω) and its unit is gal/√(rad/sec), t_s(f) is the starting
time, and t_p(f) is the time from t_s(f) to the appearance time of the maximum value α_m(f). α_m(f)
is given by the following regression equation with respect to the frequency f (Hz), magnitude M,
and epicentral distance Δ:
(3)
where B_0(f), B_1(f) and B_2(f) are coefficients obtained from a regression analysis
based on Japanese strong-motion data and are functions of the frequency f.
Since α_m(f) given by Eq. 3 denotes the intensity at the bedrock, it does not involve the effect
of the local ground condition overlying the bedrock. Using the conversion factor β_s(f) 5), it is possible to
introduce the effect of nonlinear soil amplification into the estimation of the intensity parameter at the
soil surface. β_s(f) is given as follows:
where u_0 and u_1 are functions of the frequency f, the ground parameter S_n and the depth to the bedrock
d_p. S_n is evaluated from the blow-count profile. α'_m(f) denotes the folding point of the stress-strain curve.
Consequently, the intensity parameter at the ground surface α_ms(f) is calculated as
(6)
In this paper, the elasto-plastic analysis is performed using a bi-linear model. In general,
degrading tri-linear models, e.g., the Takeda, Mutoh and Fukuda models, have been used for
the yielding failure of reinforced concrete members. However, this study aims to show the efficiency
of the method for estimating the maximum response displacement. To keep the discussion simple,
a bi-linear model is employed instead of the tri-linear models.
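A minimal sketch of such a bi-linear restoring-force backbone (monotonic loading only; the function name and the post-yield stiffness ratio alpha are assumptions for illustration):

```python
def bilinear_force(x, k, delta_y, alpha=0.05):
    # bi-linear backbone: elastic stiffness k up to the yield displacement
    # delta_y, reduced post-yield stiffness alpha * k beyond it
    if abs(x) <= delta_y:
        return k * x
    sign = 1.0 if x > 0 else -1.0
    return sign * (k * delta_y + alpha * k * (abs(x) - delta_y))
```

A degrading tri-linear model would add a cracking branch and stiffness-degradation rules on load reversal, which is what the Takeda-type models provide.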
For the variation of ground motion levels, fifteen combinations of magnitude and epicentral
distance are examined. Table 1 presents the ground conditions used in the numerical computation.
Ground 1 in Table 1 denotes the bedrock. In the elasto-plastic response analysis, it is necessary to
determine the yielding displacement at the top of the piers. However, the determination of the yielding
displacement is very difficult, because it is affected by the damage state of the concrete. Using
the characteristic that the velocity response spectrum is constant regardless of the natural frequency, it is
possible to calculate the yielding displacement δ_y. Namely, δ_y is estimated by dividing the response
velocity by the natural circular frequency. A parametric analysis is performed for eight cases
with V (velocity) = 2.0, 5.0, 8.0, 10.0, 20.0, 30.0, 40.0 and 50.0 cm/sec. For the natural frequency,
ten cases are considered: f_0 = 0.2, 0.5, 0.7, 1.0, 2.0, 3.0, 4.0, 5.0, 7.0 and 10.0 Hz, and
two damping factors h = 0.05 and h = 0.1 are considered. Table 2 provides the comparison of the
proposed method and the method based on the maximum acceleration A_max. The values in Table 2
are the correlation coefficients obtained by the two methods for the case of V = 30.0
cm/sec. The closer the correlation coefficients are to unity, the better the solution. The values with the
symbol * show the cases in which the method using the maximum acceleration gives better results than the
proposed method. From Table 2, it is seen that the proposed method is preferable to the method using
the maximum acceleration when the natural frequency is less than 2.0 Hz. Although there are
several values with the symbol * in the cases of f_0 = 0.5 and 0.7 Hz, the differences are negligibly
small. This tendency is due to the fact that when f_0 is greater than 2.0 Hz, the structural response
depends strongly on ground motions in the high frequency range, and hence the maximum
acceleration corresponds well to the maximum response displacement. It is also due to the fact
that the assumption of constant response velocity is valid in the high frequency region.
Consequently, it is concluded that the proposed method is suitable for the analysis of structures
with long natural periods, such as towers of suspension bridges and high-rise buildings.
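The yield-displacement estimate used above, δ_y obtained by dividing the response velocity by the natural circular frequency ω_0 = 2π f_0, can be sketched as follows (the function name is an assumption):

```python
import math

def yield_displacement(v, f0):
    # delta_y = V / omega_0, with omega_0 = 2 * pi * f0
    # v in cm/sec, f0 in Hz, result in cm
    return v / (2.0 * math.pi * f0)
```

For V = 30.0 cm/sec and f_0 = 1.0 Hz this gives δ_y of roughly 4.77 cm.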
The relation between α_ms(f_0) and Y_max is influenced by the natural frequency, damping char-
acteristics and yielding displacement. The yielding displacement may differ among structures
with the same natural frequency. Hence, the relation between α_ms(f_0) and Y_max is calculated for
various values of the natural frequency through the elasto-plastic analysis. Table 3 shows the slopes
of the regression curves and the correlation coefficients obtained here. This table can be used to
estimate an approximate value of the slope B with the aid of an interpolation procedure.
The maximum response displacement of the top of pier structures can then easily be calculated using
the value of B and Eq. 6.
(7)
In this paper, the analytical process shown in Fig. 1 is employed to evaluate the seismic reliability
of existing structures. The occurrence pattern of earthquakes varies according to the location of
active faults. Therefore, seismic zoning is carried out by considering the effect of active faults. For
two big cities in Japan, i.e., Tokyo and Osaka, seismic zoning is performed using concentric circles
and radial rays. In the zoning, the maximum value of the epicentral distance is 300 km,
whereas the minimum values depend on each zone.
The cumulative distribution function of the earthquake magnitude F_M(m) is given in the form of
the empirical Gutenberg-Richter law 6).
[Table 3: slopes B of the regression curves and correlation coefficients r for f_0 = 0.2, 0.5, 0.7, 1.0 and 2.0 Hz on Grounds 1 to 4; h = 0.1]
[Fig. 1 Analytical process: response analysis → maximum displacement Y_max → reliability analysis with Z = Y_a - Y_max → failure probability P_f]
where M_Uk and M_Lk are the maximum and minimum magnitudes in the k-th zone, and b_k is the
b-value in the k-th zone. Eq. 8 denotes the cumulative distribution of the magnitude of earthquakes
occurring in the k-th zone. Denoting the area of the k-th zone by A_k, the number of earthquake
occurrences within t_r years can be calculated as ν_k A_k t_r, where ν_k is the number of earthquake
occurrences per unit area per year. It is assumed that the magnitudes of the n earthquakes occurring
in the k-th zone, M_1, M_2, ..., M_n, are mutually independent and follow the same probability
distribution F_Mk(m). Then, the distribution of the maximum value Z among M_1 to M_n can be obtained as

F_Z(z) = P(Z ≤ z) = P(∩_{i=1}^{n} {M_i ≤ z}) = {F_Mk(z)}^n    (9)

Replacing n by ν_k A_k t_r, the probability density function of the maximum magnitude within t_r years
can be derived as
(10)
(11)
where Δ_Lk and Δ_Uk are the lower and upper bounds of the epicentral distance, respectively. Using the
probability density functions given by Eqs. 10 and 11, the maximum response displacement Y_max
can be calculated based on the relation expressed by Eq. 7. In the calculation, the allowable dis-
placement Y_a, the earthquake magnitude M and the epicentral distance Δ are treated as random variables,
while the others are treated as deterministic quantities.
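The maximum-magnitude distribution of Eqs. 8-10 can be sketched numerically. The doubly truncated exponential (Gutenberg-Richter) form assumed below for Eq. 8, and all parameter values, are illustrative assumptions, since the paper's exact coefficients are not reproduced here; n is replaced by ν_k A_k t_r as in Eq. 10:

```python
import math

def gr_cdf(m, m_l, m_u, b):
    # assumed doubly truncated Gutenberg-Richter CDF on [m_l, m_u]
    beta = b * math.log(10.0)
    num = 1.0 - math.exp(-beta * (m - m_l))
    den = 1.0 - math.exp(-beta * (m_u - m_l))
    return min(max(num / den, 0.0), 1.0)

def max_magnitude_cdf(z, m_l, m_u, b, nu, area, t_r):
    # CDF of the maximum magnitude within t_r years:
    # F_Z(z) = F_M(z) ** (nu * area * t_r), cf. Eqs. 9 and 10
    n = nu * area * t_r
    return gr_cdf(z, m_l, m_u, b) ** n
```

For a fixed z below m_u, this CDF decreases as the exposure time t_r grows, i.e., a larger maximum magnitude becomes more likely.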
Here, the failure probability is calculated based on AFOSM (Advanced First-Order Second-Moment
Method) 7). The reason is that AFOSM can include any kind of random variable and can deal with
non-normal variables as well as normal variables through a transformation from non-normal to
normal distributions. The limit state function used here is as follows:
(12)
where β_s(f_0) is given by Eqs. 4 and 5, and α_m(f_0) is given by Eq. 3.
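As an illustration of the first-order iteration underlying such an AFOSM analysis, a minimal Hasofer-Lind / Rackwitz-Fiessler (HL-RF) sketch in standard normal space is given below. The function names and the linear example limit state are assumptions for illustration, not the paper's Eq. 12:

```python
import math

def hlrf(g, grad_g, u0, tol=1e-8, max_iter=100):
    # HL-RF iteration: u_{k+1} = ((grad . u_k - g(u_k)) / |grad|^2) * grad
    u = list(u0)
    for _ in range(max_iter):
        gv = g(u)
        gr = grad_g(u)
        norm2 = sum(c * c for c in gr)
        s = (sum(gi * ui for gi, ui in zip(gr, u)) - gv) / norm2
        u_new = [s * gi for gi in gr]
        if max(abs(a - b) for a, b in zip(u, u_new)) < tol:
            u = u_new
            break
        u = u_new
    beta = math.sqrt(sum(c * c for c in u))  # reliability index
    return beta, u
```

For the linear limit state g(u) = 3 - u_1 - u_2 the iteration converges in one step to beta = 3/sqrt(2), about 2.121.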
NUMERICAL EXAMPLE
A numerical example is presented to describe the method developed here. In this example,
seismic reliability is evaluated paying attention to the response displacement of the top of piers, in
which the ductility factor and the coefficient of variation of allowable displacement are assumed to
be 4 and 0.1, respectively.
Consider a pier whose natural period is 0.626 sec and whose yielding displacement is 0.618
cm. Table 4 and Table 5 present the failure probabilities obtained for two big cities in Japan, i.e.,
Tokyo and Osaka. These results are calculated by AFOSM and by FOSM (First-Order Second-Moment
Method), and the results by FOSM are compared with those by AFOSM. AFOSM is inefficient in
some cases, where the design point needed in AFOSM cannot exist beyond the lower bound of the
epicentral distance. This phenomenon may be due to the fact that when the magnitude is rather
small, the design point for the epicentral distance variable is smaller than the lower bound
prescribed by the past earthquake records. This leads to an underestimation of the failure probability
in the underlying zone. For this reason, two failure probabilities are calculated, where the former
provides the upper bound and the latter provides the lower bound of the failure probability;
the true failure probability lies between these two values.
In general, earthquakes occur more frequently in Tokyo than in Osaka. However, Table 4 shows
that the failure probability for Tokyo becomes smaller than that for Osaka after 100 years. One
reason for this result is that only several zones have high earthquake occurrence rates in Tokyo,
while all the Osaka zones have uniform occurrence rates. Comparing the occurrence rates of some zones in
Tokyo and Osaka, it is evident that the zones in Tokyo have much higher occurrence rates than
the zones in Osaka. Therefore, the failure probability within about 90 years is
smaller in Tokyo. However, after 100 years, the total failure probability over the whole Osaka region exceeds
that of Tokyo. Another reason may be the approximation error associated with the transformation
from non-normal variables to normal variables.
Comparing the results by AFOSM and FOSM, there are significant differences in the
failure probabilities within 10 and 20 years. For these two cases, FOSM provides zero probabilities,
whereas AFOSM provides 0.0393 and 0.0670. Naturally, the solutions by AFOSM are more reliable
than those by FOSM. This discrepancy may be due to the approximations employed in FOSM, namely the
Taylor expansion used for the linearization of the limit state function and the neglect of the information
regarding the probability distribution functions.
CONCLUSION
In this paper, an attempt was made to develop a simple method of evaluating the seismic reli-
ability of existing structures. To introduce the spectral characteristics of earthquakes, the intensity
parameter of the non-stationary spectrum was utilized for estimating the maximum response displace-
ment in a simple manner. The reliability analysis was performed paying attention to the response
displacement of the top of the pier, which makes the elasto-plastic calculation sim-
pler. Needless to say, this simplification must be checked by performing a sufficient
number of numerical calculations. Moreover, it is necessary to classify the structural behavior into
elastic behavior and inelastic behavior. Using this classification, it is possible to obtain a more
reliable regression curve for the estimation of the maximum response displacement.
REFERENCES
1) Kameda, H.: Evolutionary Spectra of Seismogram by Multifilter, Jour. of Eng. Mech. Div.,
ASCE, Vol.101, pp.787-801, 1975.
2) Kanda, J.: Safety Evaluation of Inelastic Building Structures in a Seismic Region, Proc. of
Korea-Japan Joint Seminar on Emerging Technologies in Structural Engineering and Mechanics,
pp.216-225, 1988.
3) Goto, H., Sugito, M., Kameda, H., Saito, H. and Ohtaki, K.: Prediction of Nonstationary
Earthquake Motions for Moderate and Great Earthquakes on Rock Surface, Annuals, Disaster
Prevention Research Institute, Kyoto Univ., No.27, B-2, pp.19-48, 1984.
4) Sugito, M. and Kameda, H.: Prediction of Nonstationary Earthquake Motions on Rock Surface,
Proc. of JSCE Structural Eng./Earthquake, Vol.2, No.2, pp.149-159, 1985.
5) Sugito, M., Goto, H. and Takeyama, S.: Conversion Factor between Earthquake Motion on Soil
Surface and Rock Surface with Nonlinear Soil Amplification Effect, Proc. of 7th Japan Earthquake
Engineering Symposium, pp.571-576, 1986.
6) Kameda, H. and Takagi, H.: Seismic Hazard Estimation Based on Non-Poisson Earthquake
Occurrence, The Memoirs of the Faculty of Eng., Kyoto Univ., Vol.13, pp.397-433, 1981.
7) Thoft-Christensen, P. and Baker, M.J.: Structural Reliability Theory and Its Applications,
Springer-Verlag, 1982.
SENSITIVITY ANALYSIS OF STRUCTURES BY
VIRTUAL DISTORTION METHOD
1. Introduction
Deterministic and reliability-based structural optimization are very active research areas. Among
the reasons are the recent development of effective mathematical optimization algorithms, such
as the NLPQL algorithm, Schittkowski [1], and the VMCWD algorithm, Powell [2], and of
first-order reliability methods (FORM), see Madsen et al. [3]. The rapid growth of computing
power has also been very important.
Most effective optimization algorithms require that the derivatives of the objective function and
the constraints are determined with high accuracy. Usually, quasi-analytical derivatives are used
in structural optimization, see Haftka [4].
The recently developed Virtual Distortion Method (VDM) is a numerical technique which offers
an efficient approach to the calculation of sensitivity derivatives. This method has originally been
applied to structural remodelling and collapse analysis, see Holnicki-Szulc & Gierlinski [5] and
Gierlinski & Holnicki-Szulc [6]. Some aspects of using VDM for optimization have also been discussed,
see Holnicki-Szulc & Gierlinski [7] and Holnicki-Szulc [8]. Calculation of the derivatives by VDM is
based on the same approach as employed in structural remodelling. Responses corresponding
to two neighbouring modifications are calculated first, and then the derivatives of the response with
respect to the design parameters are approximated using the finite difference technique. These
response derivatives are essential ingredients of the sensitivity derivatives, and the most expensive
to calculate.
In section 2 a deterministic structural optimization problem is considered, and it is shown how
the derivatives of the structural response (displacements and local forces) can be estimated quasi-
analytically.
In section 3 reliability-based structural optimization problems are formulated. Some of the quanti-
ties modelling the structural system are assumed to be modelled by stochastic variables; examples
are the yield stress and the loading (e.g. wind and wave loads). The reliability of the structural
system is measured by reliability indices estimated using FORM, see Madsen et al. [3]. It is shown
how the derivatives of the reliability indices with respect to the design variables can be determined.
VDM can be used to estimate the derivatives of the structural response in an effective way, as
described in section 4. A brief description of the VDM theory in relation to structural remo-
delling, and a discussion of how it can be used to estimate the derivatives of the structural response
in an effective way, are also given in section 4.
Finally, in section 5 a simple numerical example is shown.
If the ith constraint is related to the displacement of a given degree of freedom in a finite element
model of the structural system, and V = (V_1, V_2, ..., V_NK) denotes the displacement degrees of
freedom, then the ith constraint in (2) can be written
(4)
where V_max is the maximum permissible displacement and I is the degree of freedom corresponding
to the ith constraint. Then
where K is the stiffness matrix of dimension NK x NK, P is the load vector and {.}_I signifies
the Ith element of the vector.
It is usually cheap to estimate ∂b_i/∂z_j and ∂b_i/∂V (analytically or numerically) from a numerical
point of view.
If the ith constraint for example models a stress constraint, then it is written
b_i(z, V(z)) = 0    (6)
One "classical" method to estimate db_i/dz_j is to use quasi-analytical derivatives, see Haftka [4]:

db_i/dz_j = ∂b_i/∂z_j + (∂b_i/∂V)^T ∂V/∂z_j
          = ∂b_i/∂z_j + (∂b_i/∂V)^T K^{-1} (∂P/∂z_j - (∂K/∂z_j) V)    (7)
At each iteration of a first-order optimization algorithm (which requires function values and deri-
vatives), the number of solutions of the FEM problem (the so-called large problem) is (N + 1).
Another method to estimate db_i/dz_j is the adjoint method:
(8)
At each iteration the number of solutions of FEM problems (large problems) is (m + 1).
Which of these two methods is the most effective thus depends on the number of optimization
variables N and the number of constraints m.
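A minimal sketch of the quasi-analytical derivative of Eq. (7) for a hypothetical two-spring system (all names and the example structure are assumptions; dP/dz_j = 0 here, so only the -K^{-1}(∂K/∂z_j)V term remains):

```python
def solve2(K, b):
    # direct solution of a 2x2 linear system K x = b
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    return [(K[1][1] * b[0] - K[0][1] * b[1]) / det,
            (K[0][0] * b[1] - K[1][0] * b[0]) / det]

def stiffness(z):
    # two springs in series: k1 = z (the design variable), k2 = 2.0
    return [[z + 2.0, -2.0], [-2.0, 2.0]]

def displacement_sensitivity(z, P):
    # dV/dz = K^{-1} (dP/dz - (dK/dz) V), with dP/dz = 0
    K = stiffness(z)
    V = solve2(K, P)
    dK = [[1.0, 0.0], [0.0, 0.0]]  # dK/dz for k1 = z
    rhs = [-(dK[0][0] * V[0] + dK[0][1] * V[1]),
           -(dK[1][0] * V[0] + dK[1][1] * V[1])]
    return solve2(K, rhs)
```

For z = 1 and P = [0, 1] the closed form is V = [1/z, 1/2 + 1/z], so both sensitivity components equal -1.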
β_1, ..., β_m are the reliability indices of the m potential failure elements, β_i^min, i = 1, 2, ..., m, the
corresponding minimum reliability indices, and (11) models M deterministic constraints.
The reliability indices are determined by, see Madsen et al. [3]:
where u = (u_1, ..., u_n) are realizations of the standardized normal stochastic variables U, which are
connected with the stochastic variables X by X = T(U). X models the uncertainty related to the
quantities and models describing the load and/or strength of the structural elements. The stiffness
properties are assumed to be deterministic. Realizations x̄ of X for which the failure function g ≤ 0
correspond to failure states, while realizations x̄ for which g > 0 correspond to safe states.
Alternatively, the reliability-based optimization problem can be formulated with a system reliability
constraint. In this case (10) is replaced by the constraint
then the derivatives of the element reliability constraints with respect to the optimization variables
can be determined from

dβ_i/dz_j = (1/|∇_u g_i|) [∂g_i/∂z_j + Σ_{k=1}^{n} (∂g_i/∂b_k)(∂b_k/∂z_j)]    (16)

where all terms are calculated at the design point (the solution point of the optimization problem
(13)). ∂g_i/∂z_j and ∂g_i/∂b_k are usually estimated numerically or analytically without significant computer
costs. The term ∂b_k/∂z_j is determined as described in section 2.
J
The derivatives of the system reliability constraint with respect to the optimization variables can
be determined from

dβ^s/dz_j = Σ_{i=1}^{m} (∂β^s/∂β_i)(∂β_i/∂z_j) + Σ_{i<k} (∂β^s/∂ρ_ik)(∂ρ_ik/∂z_j)    (17)

where ∂β_i/∂z_j is determined by (16), and ∂β^s/∂β_i, ∂β^s/∂ρ_ik and ∂ρ_ik/∂z_j can be estimated as described in Sørensen
[11].
Ā_j σ̄_j = A_j [σ_j + E_j Σ_{i=1}^{NA} (D_ji - δ_ji) ε⁰_i]
        = A_j E_j (ε_j - Σ_{i=1}^{NA} δ_ji ε⁰_i)    (23)

Σ_{j=1}^{NA} B_ij(ζ_i) ε⁰_j = -ζ_i ε_i    (24)

where
B_ij = ζ_i D_ij + δ_ij
ζ_i = (Ā_i - A_i)/A_i
Calculation of the components of the matrix D requires NA solutions of a finite element problem
(25)
where P(j) = f(ε⁰_j = 1) is the load vector corresponding to the virtual distortion ε⁰_j = 1 of member
j, and V(j) is the vector of nodal displacements in global coordinates. Denote by d(ij) the
corresponding vector of local displacements in the ith member. The components of the influence
matrix D can now be calculated from

D_ij = (d_2(ij) - d_1(ij)) / l_ij    (26)
(27)
∂V/∂A_j ≈ ΔV(j)/ΔA_j    (28)

db_i/dA_j = ∂b_i/∂A_j + (∂b_i/∂V)^T ∂V/∂A_j    (29)
It should be noted that generally B is a full matrix, whereas the stiffness matrix K is usually
banded.
Performing the sensitivity analysis based on the VDM concept is particularly favourable when the
number of modified elements NA is relatively small compared with the total number of structural
elements. This advantage grows if the sensitivity calculations need to be repeated many times
in the optimization process.
5. Example
[Fig. 1 Plane truss model of an offshore jacket structure with loading; SWL marked, overall dimensions 31.65 m and 51.15 m]
A plane truss model of an offshore jacket structure is considered. The structural system and the
loading are shown in Fig. 1. Section properties of the truss elements are shown in Table 1.
In Table 2, sensitivity coefficients of the member forces are shown for all structural elements
with respect to changes in the cross-sectional area of the elements in group 2. The "classical"
method, eq. (7), is compared with estimates obtained with VDM using ΔA_i = A_i/1000.
Some small deviations in the sensitivity coefficients are obtained, but generally there is good
agreement between the results obtained by the two methods.
In this example NK = 18. If all N = 7 groups of elements are modelled as design variables,
then NA = 19. The "classical" method for estimating the sensitivity coefficients
requires (N + 1) = 8 solutions of the FEM
equations with NK = 18 unknowns, whereas the VDM technique requires at each iteration N + 1 =
8 solutions of the simulation problem with NA = 19 unknowns.
However, if only group 2 is considered as a design variable, then N = 1 and NA = 4. The "classical"
method requires 2 solutions of the FEM equations with 18 unknowns, whereas the VDM technique
requires at each iteration 2 solutions of the simulation problem with only 4 unknowns.
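The VDM-style finite-difference estimate with relative step ΔA_i = A_i/1000 used in this example can be sketched as follows; the one-bar response function is a hypothetical stand-in for the jacket model:

```python
def fd_sensitivity(response, A, j, rel=1e-3):
    # forward-difference sensitivity of a response vector with respect
    # to the cross-sectional area A[j], step Delta A = A[j] / 1000
    dA = A[j] * rel
    A_pert = list(A)
    A_pert[j] += dA
    r0, r1 = response(A), response(A_pert)
    return [(b - a) / dA for a, b in zip(r0, r1)]

def bar_tip_displacement(A, P=1.0, L=1.0, E=1.0):
    # axial bar: tip displacement V = P L / (E A)
    return [P * L / (E * A[0])]
```

For A = [2.0] the estimate is about -0.25, matching the analytic dV/dA = -PL/(EA^2).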
6. Conclusions
The following conclusions can be stated :
• It has been demonstrated that the Virtual Distortion Method can be successfully applied to
the estimation of displacement sensitivity coefficients (estimates of stress sensitivity coefficients
can also very easily be determined).
• The most costly computation is concerned with the influence matrix D. In most cases the
following iterative procedure requires much less computer effort.
• Which of the three methods for estimating sensitivity coefficients is the most effective from
a computational point of view depends on the actual values of NK, N, m and NA.
• It is interesting to note that the influence matrix in the VDM approach is a function of
elastic properties and element topology of the structure. It can be calculated employing
either the displacement or the force method, and thus the VDM approach can be associated
with practically any existing software for truss or frame analysis.
7. References
[1] Schittkowski, K.: NLPQL : A FORTRAN Subroutine Solving Constrained Non-Linear Pro-
gramming Problems. Annals of Operations Research, 1986.
[2] Powell, M.J.D. : VMCWD : A Fortran Subroutine for Constrained Optimization. Report
DAMTP 1982/NA4, Cambridge University, England, 1982.
[3] Madsen, H.O., S. Krenk & N.C. Lind: Methods of Structural Safety. Prentice-Hall, 1986.
[4] Haftka. R.T. & H.P. Kamat : Elements of Structural Optimization. Martinus Nijhoff
Publishers, Dordrecht, 1985.
[5] Holnicki-Szulc, J. & J. T. Gierlinski : Structural Modifications Simulated by Virtual Distor-
tions. Int. J. Methods Eng., Vol. 20, pp. 645-666, 1989.
[6] Gierlinski, J.T. & J. Holnicki-Szulc : Progressive Collapse Analysis of Frames Using the
Virtual Distortion Method. In preparation.
[7] Holnicki-Szulc, J. & J.T. Gierlinski : Optimization of Skeletal Structures with Material
Nonlinearities. Proc. of the First Int. Conf. on Computer Aided Design of Structures,
Southampton, U.K., June 1989, pp. 209-220.
[8] Holnicki-Szulc, J. : Optimal Structural Remodelling - Simulation by Virtual Distortion
Method. Comm. Applied Num. Methods, 1989.
[9] Sørensen, J.D.: Probabilistic Design of Offshore Structural Systems. Proc. 5th ASCE Spec.
Conf., Virginia, 1988, pp. 189-193.
[10] Sørensen, J.D. & P. Thoft-Christensen : Structural Optimization with Reliability Con-
straints. Proc. 12th IFIP Conf. on System Modelling and Optimization, Springer Verlag,
1986, pp. 876-885.
[11] Sørensen, J.D. : Reliability Based Optimization of Structural Elements. Structural Relia-
bility Theory, Paper No. 18, The University of Aalborg, 1986.
RELIABILITY OF DANIELS SYSTEMS WITH LOCAL LOAD
SHARING SUBJECT TO RANDOM TIME DEPENDENT INPUTS
Mircea Grigoriu
Cornell University
Hollister Hall, Ithaca, NY 14853, U.S.A.
INTRODUCTION
Daniels systems consist of n parallel brittle fibers and can carry load in
damage states m = n, n-1, ..., 1, characterized by m unfailed fibers and n-m failed
fibers (Fig. 1). It is assumed that the distribution of the applied load among
fibers exhibits concentrations in the vicinity of failed fibers (local load sharing
rule), consistent with the stress distribution observed in composite and fiber-
reinforced materials (9). Most studies on the reliability of Daniels systems
involve elementary loading conditions, e.g., time-invariant and monotonic loads
(8,10). Dynamic loads have only recently begun to be considered, for Daniels systems with the
equal load sharing rule (1,2,3,4).
(1)
This paper develops a technique for estimating and bounding the probability PF(T)
for Daniels systems with an elementary local load sharing rule and a relatively
small number of fibers. The technique is based on crossing characteristics of a
generalized Slepian process, probability bounds for systems, and first and second
order reliability methods (FORM/SORM). The paper is an extension of recent work by
the author on dynamic Daniels systems with the equal load sharing rule (4). It is
assumed that (i) the resistances {R_i}, i = 1, ..., n, are independent identically
distributed random variables; (ii) the spatial distribution of fibers is uniform on
locations {1, 2, ..., n}; a particular spatial distribution of fiber resistances
is referred to as a spatial configuration; (iii) the load carried by a fiber prior to
failure is redistributed evenly between the nearest surviving fibers; other more
complex load sharing rules can be considered (9) and can be directly incor-
porated in the proposed technique for the reliability analysis of dynamic Daniels
systems; (iv) X(T), T ≥ 0, is a quasistatic Gaussian load process; and (v) fibers
are in tension with nearly unit probability and fail when the load effect exceeds
the fiber strength for the first time. Other failure conditions, e.g., failure
caused by fatigue, can also be incorporated in the analysis.
Numerical results for the probability PF(T) and bounds on this probability are
presented for Daniels systems with n = 3 fibers of deterministic and exponentially dis-
tributed resistances that are subject to a quasistatic uniformly modulated
Gaussian load process.
The fiber spatial configuration can significantly affect damage evolution and
reliability under local load sharing rules because stronger fibers can be subject
to load concentrations causing their failure prior to the failure of weaker fibers.
Thus, various spatial configurations may generate different failure paths depending
on the relative magnitude of fiber resistances and load concentration factors.
Failure Paths
" <"R2 < ... < Rn
Let Rl " be the order statistics of fiber resistances {Ril, i
1, ... , n, (5). There are n!/2 distinct spatial configurations of these resis-
" " 1\ A 1\ 1\
tances because, e.g., configurations {Rl, R2, ... , Rn-l' Rnl and {Rn , Rn-l' ... ,
"
R2, "
Rll are indistinguishable from a mechanical viewpoint. Each spatial config-
"
uration of order resistances {Ril, i - I , 2, ... , n, can result in one or more
failure paths depending on the relative magnitude of the resistances. Figure 2
shows spatial configurations of fiber resistances and failure paths for a Daniels
Probability of Failure
Let qs be the number of failure paths associated with spatial configuration s =
1, 2, ..., ns. The total number of distinct failure paths is q = Σ_{s=1}^{ns} qs. Since
failure paths are disjoint events, the probability of failure PF(τ) in Eq. 1 can be
obtained from

PF(τ) = Σ_{j=1}^{q} PF,j(τ) pj   (2)

PF,j(τ) = P[ Σ_{m=1}^{n} Y_{m,j} < τ ]   (3)

in which pj = the probability of failure path j and Y_{m,j} = the duration of damage
state m along failure path j. Methods for calculating these probabilities are
examined in the next section.
The partial sum

Σ_{j∈J} PF,j(τ) pj   (4)

over any subset J of the failure paths constitutes a lower bound on PF(τ). Consider
a failure path j∈J and a damage state mj = n, n-1, ..., 1. Let
(5)
Then,

PF,j(τ; mj) = P[ Σ_{m=mj}^{n} Y_{m,j} < τ ]   (6)

and the sum

Σ_j PF,j(τ; mj) pj   (7)

constitutes an upper bound on PF(τ) for mj = n, n-1, ..., 1. This bound involves
all incomplete failure paths of the system.
is the resultant force acting on the unfailed fibers of a Daniels system in damage
state m of failure path j. Time t originates at the beginning of damage state m,
and the random variable

T_{m+1,j} = Σ_{q=m+1}^{n} Y_{q,j}   (9)

denotes the total duration of damage states n, n-1, ..., m+1 along failure path j.
Second-moment characteristics of X_{m,j}(t) follow directly from the corresponding
descriptors of the applied load X(τ) and Eq. 8 for any given value of the random
time T_{m+1,j}.
Critical Thresholds
Consider a damage state m of failure path j∈J of a Daniels system subject to the
quasistatic load X(τ). Let α_ℓ^{(m,j)}, 0 < α_ℓ^{(m,j)} ≤ 1 and Σ_ℓ α_ℓ^{(m,j)} = 1,
be the load concentration factor for a fiber of strength R̂ℓ in this damage state.
For example, these factors are 2/3 and 1/3 for the fibers of strength R̂2 and R̂3
of damage state m = 2 and spatial configuration {R̂1, R̂2, R̂3} of the Daniels
system in Fig. 2. A
fiber fails when the load process α_ℓ^{(m,j)} X(Y_{n,j} + ... + Y_{m+1,j} + t), t ≥ 0,
exceeds the fiber strength R̂ℓ for the first time.
The probabilities of the random variables Y_{m,j} and Σ_{m=mj}^{n} Y_{m,j} depend on
the characteristics of X_{m,j}(t) and the critical thresholds ξ_{m,j} for m = n, n-1,
..., mj and are determined in the next two sections. Thus, joint probabilistic
characteristics of the critical thresholds
[Figure 2: damage states m = 3, 2, 1 and associated failure paths for the spatial
distributions (a) {R̂1, R̂2, R̂3}, (b) {R̂1, R̂3, R̂2} and (c) {R̂2, R̂1, R̂3}, with
the fiber loads x/3, 2x/3, etc., indicated in each damage state.]
ξ_{m,j} are needed for estimating the reliability of Daniels systems and can be
obtained from the probabilities of the order resistances {R̂i}, i = 1, 2, ..., n. The
probabilities of R̂i; {R̂i, R̂j}, i < j; {R̂i, R̂j, R̂k}, i < j < k; and {R̂1, R̂2,
..., R̂n} follow from standard order-statistics results; in particular,
f_{i,j,k}(u_i, u_j, u_k) = n! / [(i-1)! (j-i-1)! (k-j-1)! (n-k)!] [F(u_i)]^{i-1}
  [F(u_j) - F(u_i)]^{j-i-1} [F(u_k) - F(u_j)]^{k-j-1} [1 - F(u_k)]^{n-k}
  f(u_i) f(u_j) f(u_k)   (10)

f(u_1, u_2, ..., u_n) = n! Π_{k=1}^{n} f(u_k),   u_1 < u_2 < ... < u_n   (11)

in which F and f denote the common distribution and density functions of the fiber
resistances.
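A quick sanity check on the joint density of the ordered resistances: any particular ordering of n independent, identically distributed draws occurs with probability 1/n!, which is where the n! factor comes from. A Monte Carlo confirmation with exponential resistances (an illustrative choice):

```python
import random

random.seed(1)

n, trials = 3, 60000
ordered = 0
for _ in range(trials):
    u = [random.expovariate(1.0) for _ in range(n)]
    if u[0] < u[1] < u[2]:   # one particular ordering of the n draws
        ordered += 1

print(ordered / trials)   # should be close to 1/3! = 1/6 ≈ 0.1667
```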
Residence Periods
Consider a differentiable Gaussian process V(t), t ≥ 0, with mean and covariance
functions μ(t) = E V(t) and γ(t, s) = E[(V(t) - μ(t))(V(s) - μ(s))], satisfying the
initial conditions V(0) = η', V̇(0) = v'. The mean η-upcrossing rate of V(t) | (V(0) = η',
V̇(0) = v') can be obtained from Rice's formula and has the expression [6]
ν(η; t) = ∫_0^∞ v f(η, v; t) dv   (12)

in which f(·, ·; t) is the joint density of (V(t), V̇(t)) conditional on (V(0), V̇(0))
= (η', v'). Because the process is Gaussian, this density is characterized by the
conditional mean (a_t, b_t) and the conditional covariance {S_{kl,t}} of (V(t), V̇(t)),
and the rate in Eq. 12 can be evaluated with the auxiliary quantities

Λ_t = b_t + (S_{12,t} / S_{22,t})(η - a_t),   S_t² = S_{11,t} - S_{12,t}² / S_{22,t},
Φ(u) = ∫_{-∞}^{u} φ(a) da   (13)

where φ and Φ are the standard normal density and distribution functions. The
conditional moments follow from the usual Gaussian conditioning formulas,

(a_t, b_t)^T = (μ(t), μ̇(t))^T + Σ_{10,t} Σ_{00}^{-1} ((η', v')^T - (μ(0), μ̇(0))^T),
{S_{kl,t}} = Σ_{11,t} - Σ_{10,t} Σ_{00}^{-1} Σ_{10,t}^T   (14)

in which Σ_{11,t}, Σ_{10,t} and Σ_{00} are the 2×2 covariance matrices of (V(t), V̇(t))
with itself, of (V(t), V̇(t)) with (V(0), V̇(0)), and of (V(0), V̇(0)) with itself,
with entries given by γ and its partial derivatives ∂γ/∂t, ∂γ/∂s and ∂²γ/∂t∂s
evaluated at the argument pairs (t, t), (t, 0) and (0, 0).
The probability of the first-passage time T of V(t) relative to the set (-∞, η),
η > η', can be approximated by

P(T > τ) ≈ exp( - ∫_0^τ ν(η; t) dt )   (15)

provided that η is large and the difference between η and η' is not small. The
approximation is based on the assumption that exceedances of the threshold η follow an
inhomogeneous Poisson process of intensity ν(η; t). It can be shown that the slope
of V(t) at the time t of an η-upcrossing follows the probability density (4)

g(v | η; t) = v f(η, v; t) / ν(η; t),   v > 0   (16)

where f(·, ·; t) denotes the joint density of (V(t), V̇(t)) conditional on the initial
values (η', v').
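The Poisson approximation can be checked numerically in the simplest stationary special case, where the unconditional Rice rate has the closed form ν(η) = (σ̇/2πσ) exp(-η²/2σ²). The sketch below (hypothetical spectrum, threshold and duration; the conditioning on initial values is ignored) simulates the process by a spectral representation and compares the empirical survival fraction with exp(-ν(η)τ):

```python
import math
import random

random.seed(2)

# Stationary Gaussian process with unit variance via spectral representation:
# X(t) = sqrt(2/N) * sum_j cos(w_j t + phi_j), frequencies w_j drawn from a
# hypothetical one-sided spectrum uniform on [0, 2] rad/s.
N = 100
freqs = [random.uniform(0.0, 2.0) for _ in range(N)]
sigma_dot = math.sqrt(sum(w * w for w in freqs) / N)   # std dev of X'(t)

def sample_path(tmax, dt):
    """One realization of X(t) sampled on a grid of spacing dt."""
    phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
    amp = math.sqrt(2.0 / N)
    nsteps = int(tmax / dt)
    return [amp * sum(math.cos(w * (k * dt) + p) for w, p in zip(freqs, phases))
            for k in range(nsteps + 1)]

eta, tau, dt, runs = 2.0, 10.0, 0.05, 200
nu = (sigma_dot / (2.0 * math.pi)) * math.exp(-eta * eta / 2.0)  # Rice's rate
survive = sum(max(sample_path(tau, dt)) < eta for _ in range(runs))
print(survive / runs, math.exp(-nu * tau))  # empirical vs Poisson approximation
```

The two numbers agree to within sampling error for a threshold this high; for low thresholds the Poisson (independent-exceedance) assumption degrades, which is exactly the caveat stated after Eq. (15).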
Probabilities PF,j(τ; mj)
Consider a spatial fiber strength configuration and a failure path j∈J
consistent with this configuration. Suppose the system's initial state
(X_{n,j}(0), Ẋ_{n,j}(0)) is a random vector and let F1(x) = P(X_{n,j}(0) < x) and
F_{2|1}(ẋ | x) = P(Ẋ_{n,j}(0) < ẋ | X_{n,j}(0) = x). The probability of the
residence period Y_{n,j} con-
where
(20)
APPLICATIONS
process with covariance function (1 + ω|τ|) e^{-ω|τ|}, ω = 1.0, and mean E X*(τ) =
5.0.
(1) {R̂1, R̂2, R̂3}; (2) {R̂1, R̂3, R̂2}; and (3) {R̂2, R̂1, R̂3}.
[System failure probability PF(τ) versus time τ, 0-2000.]
systems under local and equal load sharing rules can significantly change in time
depending on fiber resistances. Figure 4 shows bounds on PF(τ) based on the most
vulnerable spatial configuration and the residence periods Y_{n,j}. The first-order
bounds are wide. Higher-order lower bounds nearly coincide in this case with the
exact result because PF,2 = 0.
[Upper and lower bounds and system failure probability PF(τ) versus time τ, 0-200,
for resistances R̂1 = 1.25, R̂2 = 3.00, R̂3 = 5.00; the lower bound shown is
PF,1(τ); separate panels for λ = 1 and λ = 5.]
CONCLUSIONS
A general method was developed for calculating failure probabilities and bounds
on these probabilities for Daniels systems with brittle fibers of random resis-
tances that are subject to quasistatic and dynamic loads. The analysis involves a
simple local load sharing rule for representing stress concentration and an elementary
failure criterion for fibers. Concepts of the extreme-value theory of stochastic
processes and generalized Slepian models were used in the developments of the paper.
REFERENCES
1. Fujita, M., Grigoriu, M., and Rackwitz, R., "Reliability of Daniels Systems
Oscillators Including Dynamic Redistribution," Probabilistic Methods in Civil
Engineering, ed. P. D. Spanos, ASCE, NY, 1988, pp. 424-427.
5. Larson, H. J., Introduction to the Theory of Statistics, John Wiley & Sons,
Inc., New York, 1973.
6. Leadbetter, M. R., Lindgren, G., and Rootzen, H., Extremes and Related
Properties of Random Sequences and Processes, Springer-Verlag, New York, 1983.
9. Phoenix, S. L., and Smith, R. L., "The Strength Distribution and Size Effect in
a Prototypical Model for Percolation Breakdown in Materials," Technical Report
43, Mathematical Science Institute, Cornell University, Ithaca, NY, April 1989.
10. Taylor, H. M., "The Time to Failure of Fiber Bundles Subject to Random Loads,"
Advances in Applied Probability, Vol. 11, 1979, pp. 527-541.
12. Wen, Y. K., and Chen, H.-C., "On Fast Integration for Time Variant Structural
Reliability," Probabilistic Engineering Mechanics, Vol. 2, No.3, 1987, pp.
156-162.
RELIABILITY ANALYSIS OF ELASTO-PLASTIC DYNAMIC PROBLEMS
Toshiaki Hisada*, Hirohisa Noguchi*, Osamu Murayama* & Armen Der Kiureghian**
*Research Center for Advanced Science and Technology
University of Tokyo, Japan
**Department of Civil Engineering
University of California, Berkeley, USA
ABSTRACT
1. INTRODUCTION
K dU = dF   (1)
K = K_L + K_NL   (2)
(3)
and an iterative scheme is applied to each load step Fi. If each load
step is small, the load-path effect can be taken to be negligible during
each load step Fi. In the same way it may be assumed that the equation
or
(5)
where ΔUi is the total variation of Ui, given by ΔUi = δUi + (higher-order
terms). From Eq. (5) we can derive the following equations, neglecting
the second- and higher-order terms.
(6)
or
(7 )
where
(8)
and stiffness matrices.
Q_i = Σ_{l=1}^{i} ( ∫_{U̇_{l-1}}^{U̇_l} C dU̇ + ∫_{U_{l-1}}^{U_l} K dU )   (10)

tant, but the damping term might be replaced by a single integral from
U̇_0 to U̇_i due to its nature. The implicit time integration method requires an
iteration scheme such as the following.
M Ü_i^{(k)} + C U̇_i^{(k)} + K δU_i^{(k)} = Σ_{l=1}^{i} F_l - Q_i^{(k-1)}   (11)

U_i^{(k)} = U_i^{(k-1)} + δU_i^{(k)}   (12)

U̇_i^{(k)} = U̇_i^{(k-1)} + ΔU̇_i^{(k)}   (13)

where δU_i^{(k)}, ΔU̇_i^{(k)} and ΔÜ_i^{(k)} are the kth updating increments, which
converge to zero. It is noted that Δ does not denote perturbation components.
When there is a perturbation in the system, the equilibrium equation is given as
follows instead of Eq. (9):

M_a (Ü_i + ΔÜ_i) + Q_{ai} = Σ_{l=1}^{i} F_{al}   (15)

where ΔÜ_i is the total variation of the acceleration due to the perturbation of the
system, which is denoted by the subscript a. Based on Eq. (10)
the perturbed internal force vector Q_{ai} is given by

Q_{ai} = Σ_{l=1}^{i} ( ∫_{U̇_{l-1}+ΔU̇_{l-1}}^{U̇_l+ΔU̇_l} C_a dU̇
        + ∫_{U_{l-1}+ΔU_{l-1}}^{U_l+ΔU_l} K_a dU )   (17)
(19)
(20)
(22)
Substituting Eqs. (21) and (22) into Eq. (17), we have the following equa-
tion to solve for the variation of the acceleration.
(M + γΔt C + β Δt² K) δÜ_i
   = - δR_i - C ( δU̇_{i-1} + (1 - γ) Δt δÜ_{i-1} )
   - K { δU_{i-1} + δU̇_{i-1} Δt + (1/2 - β) Δt² δÜ_{i-1} }   (23)
the same as that of the original system, the variation δÜ_i is easily ob-
tained after the completion of the Newton-Raphson iteration for the original
system. Then δU̇_i and δU_i are simply calculated by Eqs. (21) and (22).
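The operator M + γΔt C + β Δt² K appearing in Eq. (23) is the effective coefficient matrix familiar from Newmark-β time stepping; a minimal linear SDOF sketch (average acceleration, β = 1/4, γ = 1/2; all numerical values hypothetical) shows the same solve-and-update pattern:

```python
import math

def newmark_sdof(m, c, k, u0, v0, dt, nsteps, beta=0.25, gamma=0.5):
    """Newmark-beta integration of m*u'' + c*u' + k*u = 0.
    Each step solves (m + gamma*dt*c + beta*dt^2*k) * a_new = residual,
    the scalar analogue of the effective matrix in Eq. (23)."""
    u, v = u0, v0
    a = (-c * v - k * u) / m                       # initial acceleration
    keff = m + gamma * dt * c + beta * dt * dt * k
    for _ in range(nsteps):
        # predictors, with the new acceleration still unknown
        u_pred = u + dt * v + (0.5 - beta) * dt * dt * a
        v_pred = v + (1.0 - gamma) * dt * a
        a_new = -(c * v_pred + k * u_pred) / keff  # solve with effective "matrix"
        u = u_pred + beta * dt * dt * a_new
        v = v_pred + gamma * dt * a_new
        a = a_new
    return u, v

# Undamped oscillator with natural period 1 s, integrated over one period
w = 2.0 * math.pi
u, v = newmark_sdof(m=1.0, c=0.0, k=w * w, u0=1.0, v0=0.0, dt=0.001, nsteps=1000)
print(u)   # after one period, very close to the exact value 1.0
```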
Figure 3 shows the mean response (the response when random vari-
ables are equal to their mean values) and the design point responses
(the responses for the most likely values of the random variables that
give rise to the failure event, at which point first or second-order
approximations to the limit-state surface are constructed) of the
displacement at node 2 for some Uo's in Case 1. Figure 4 shows similar
results of the stress response of element 1 in Case 3. Figure 5 shows
the sensitivity history of the stress response of element 1 with respect
to the yield stress in Case 3. The sensitivity histories as computed by
the present method are compared with those obtained by direct finite
difference method (FDM), and good agreement is seen. In fact the plots
of sensitivity by the direct FDM are practically identical with those in
Figure 5.
Figures 6 to 11 show the changes of the reliability index and the probability of
failure against the failure thresholds, U0 or σ0, and the probability densities of
the respective maximum values in the above three cases. These values are evaluated
by FORM and (point-fitting) SORM.
In these figures, the results of FORM and SORM are very similar, and the
reliability indices are almost linear against the threshold value in
Figures 8, 11 and 14. These results suggest that the performance
functions and limit-state functions are almost linear in the standard
normal space.
Table 1 summarizes the scaled sensitivities δ and η, where δ is defined
as s ∂β/∂μ and η as s ∂β/∂s, with μ and s the mean value and standard
deviation of each random variable, respectively. δ provides a measure of
the importance of the central value of each variable, whereas η provides
a measure of the importance of the uncertainty in each variable. It is
seen that the sensitivities with respect to the Young's moduli are
unimportant. On the other hand, the yield stresses of elements 1 and 3 in
Case 1 and elements 7 and 8 in Case 2 are most sensitive. This is easily
understood because these elements are the most stressed. But in Case 3, it
is interesting to see that the most important variable is the yield
stress in element 7, while the yield stress in element 8 is not so
important. This may not be obvious from an intuitive standpoint.
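For a limit-state function that is linear in independent normal variables, g = u0 - Σ ai Xi, both the reliability index and the scaled sensitivities have closed forms, which makes the near-linear trends noted above easy to reproduce. A sketch with hypothetical numbers (using the definitions δi = si ∂β/∂μi and ηi = si ∂β/∂si):

```python
import math

def form_linear(u0, a, mu, s):
    """Exact FORM results for the linear limit state g = u0 - sum(a_i X_i)
    with independent normal X_i ~ N(mu_i, s_i): reliability index beta and
    the scaled sensitivities delta_i = s_i dbeta/dmu_i, eta_i = s_i dbeta/ds_i."""
    sigma_g = math.sqrt(sum((ai * si) ** 2 for ai, si in zip(a, s)))
    beta = (u0 - sum(ai * mi for ai, mi in zip(a, mu))) / sigma_g
    alpha = [ai * si / sigma_g for ai, si in zip(a, s)]   # alpha_i = a_i s_i / sigma_g
    delta = [-al for al in alpha]                         # s_i * dbeta/dmu_i
    eta = [-beta * al ** 2 for al in alpha]               # s_i * dbeta/ds_i
    return beta, delta, eta

# Hypothetical two-variable example
beta, delta, eta = form_linear(u0=10.0, a=[1.0, 2.0], mu=[2.0, 1.0], s=[1.0, 1.5])
print(beta)           # (10 - 4) / sqrt(1 + 9) = 6 / sqrt(10)
print(delta, eta)
```

Because g is exactly linear here, FORM is exact and β grows linearly with the threshold u0, mirroring the behavior reported for Figures 8, 11 and 14.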
In this study, only component reliability analyses are demonstrated.
However, even if failure is defined as the union of some
events, e.g., for U01, U02 and σ0, a simple series-system analysis by
5. CONCLUSIONS
REFERENCES
[1] Ryu, Y.S., Haririan, M., Wu, C.C. and Arora, J.S., " Structural
Design Sensitivity Analysis of Nonlinear Response," Computers and
Structures, Vol. 21, No. 1/2, pp. 245, 1985.
[2] Liu, W-K., Belytschko, T. and Mani, A., " A Computational Method for
the Determination of the Probabilistic Distribution of the Dynamic
Response of Structures," ASME PVP-98-5, pp. 243, 1985.
[3] Liu, W-K., Belytschko, T. and Mani, A., " Probabilistic Finite
Elements for Nonlinear Structural Dynamics," Computer Methods in Applied
Mechanics and Engineering, Vol.56, pp. 61, 1986.
[4] Liu, P-L, Der Kiureghian, A.," Reliability Assessment of
Geometrically Nonlinear Structures," Proc. ASCE EMD/GTD/STD Specialty
Conference on Probabilistic Methods, pp. 164, 1988.
[5] Liu, P-L, Der Kiureghian, A.," Finite Element Reliability of Two
Dimensional Continua with Geometrical Nonlinearity," Proc. 5th Interna-
tional Conference on Structural Safety and Reliability, pp. 1081, 1989.
[6] Hisada, T.," Sensitivity Analysis of Nonlinear FEM," Proc. ASCE
EMD/GTD/STD Specialty Conference on Probabilistic Methods, pp.160., 1988
[7] Hisada, T.,Noguchi, H., " Sensitivity Analysis for Nonlinear
Stochastic FEM in 3D Elasto-Plastic Problems," ASME PVP Vol. 157, Book
No.H00472, pp. 175, 1989.
[8] Hisada, T., Noguchi, H.," Development of a Nonlinear Stochastic FEM
and its application," Proc. 5th International Conference on Structural
Safety and Reliability, pp. 1097, 1989.
[9] Liu, P-L., Lin, H-Z. and Der Kiureghian, A., CalREL User Manual,
Report No. UCB/SEMM-89/18, 1989.
[Example truss (360 in panels) and loading conditions. Case 1: P = 100.0×10³ [lb],
P2 = 0.0 [lb]; Cases 2 and 3: P = 175.0×10³ [lb], P1 = 0.0 [lb].]
E  = 30.0×10⁶ [lb/in²] (mean value)
ET = 30.0×10⁴ [lb/in²]
A  = 6.0 [in²]
ρ  = 0.30 [lb/in³]
[Figure 3: mean and design-point responses of the displacement at node 2,
0-100 in, t = 0-0.5 sec.]
[Figure 4: mean and design-point stress responses of element 1, with thresholds
σ0 = 20500.0 and 22000.0 lb/in² indicated, t = 0-0.5 sec.]
[Figure 5: sensitivity history of the stress response of element 1 with respect to
the yield stress. Reliability index β versus displacement threshold U0, 20.0-120.0
in, by FORM and SORM.]
[Probability of failure versus displacement threshold U0 by FORM and SORM;
g = U0 - max U2(t), t = 0-0.48 sec.]
[Probability density of the maximum displacement max U2(t), t = 0-0.48 sec;
g = U0 - max U2(t).]
Figure 8 Probability Density of Maximum Displacement
[Reliability index β versus stress threshold σ0 by FORM and SORM;
g = σ0 - max σ1(t), t = 0-0.48 sec. Probability of failure versus stress
threshold σ0, 18000-25000 lb/in².]
Figure 10 Probability of Failure vs Stress Threshold
[Probability density of the maximum stress max σ1(t), t = 0-0.48 sec;
g = σ0 - max σ1(t).]
1. Introduction
Σ_{i=0}^{p} A_i Y(k-i) = B_0 X(k)   (2.1)

Y(k) = - Σ_{i=1}^{p} A_i Y(k-i) + B_0 X(k)   (2.2)
In detail, X(k) = [ X1(k) X2(k) ... Xm(k) ]^T is an (m×1) vector whose
components Xi(k) are mutually independent with zero means and unit
variances, E[Xi²(k)] = 1. In order that Eq. (2.2) be ready for use in
the Kalman filtering procedure for estimating the unknown coefficient
matrices Ai and B0, Eq. (2.2) is converted to
Y(k) = - Σ_{i=1}^{p} diag[ Y^T(k-i), Y^T(k-i), ..., Y^T(k-i) ]
       [ a_{1i}^T a_{2i}^T ... a_{mi}^T ]^T + B_0 X(k)   (2.4)

in which a_{ji} denotes the jth row of A_i arranged as a column vector, so that the
unknown coefficients enter linearly.
Σ_{i=0}^{p} A_i E[Y(k-i) Y^T(k)] = B_0 B_0^T   (2.5)
Z(k) = [ a11 a21 ... am1 | a12 a22 ... am2 | ... | a1p a2p ... amp ]^T
     = Z(k-1) + δ(k-1)   (3.1)

Y(k) = H(k) Z(k) + E(k)   (3.2)

in which H(k) collects the Y^T(k-i) blocks of Eq. (2.4) and E(k) = B_0 X(k).
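For a scalar AR(1) process Y(k) = -a Y(k-1) + B0 X(k), the random-walk state model of Eq. (3.1) and the observation equation (3.2) reduce to a one-dimensional Kalman filter. The sketch below (hypothetical a and B0, simulated data; a minimal illustration, not the paper's implementation) recovers the coefficient:

```python
import random

random.seed(3)

# Simulate a scalar AR(1) process Y(k) = -a*Y(k-1) + b0*X(k)
a_true, b0 = 0.6, 1.0
y = [0.0]
for _ in range(3000):
    y.append(-a_true * y[-1] + b0 * random.gauss(0.0, 1.0))

# Kalman filter with random-walk state z(k) = a (Eq. 3.1) and observation
# Y(k) = h(k)*z(k) + e(k), h(k) = -Y(k-1), Var[e] = b0^2 (Eq. 3.2)
z, P, q, R = 0.0, 1.0, 1e-6, b0 * b0
for k in range(1, len(y)):
    P += q                           # predict step of the random-walk state
    h = -y[k - 1]
    K = P * h / (h * h * P + R)      # Kalman gain
    z += K * (y[k] - h * z)          # update with the innovation
    P *= (1.0 - K * h)
print(z)   # close to a_true = 0.6
```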
Identification of B_0.- Let

C = Σ_{i=0}^{p} Â_i E[Y(k-i) Y^T(k)]   (3.4)

where the values identified for A_i in the previous procedure (with Â_0 = I) and
the observation data Y(k-i) are used to compose the matrix C. A lower
triangular matrix B_0, with nonzero entries b11, b21, ..., bm1, bm2, ..., bmm, can
then be obtained by the Cholesky decomposition of C:

B_0 B_0^T = C   (3.5)
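The Cholesky factorization itself is only a few lines; a dependency-free sketch for a small symmetric positive-definite matrix C (hypothetical values):

```python
import math

def cholesky_lower(C):
    """Return lower-triangular B0 with B0 * B0^T = C (C symmetric positive
    definite), computed column by column."""
    n = len(C)
    B = [[0.0] * n for _ in range(n)]
    for j in range(n):
        d = C[j][j] - sum(B[j][l] ** 2 for l in range(j))
        B[j][j] = math.sqrt(d)
        for i in range(j + 1, n):
            B[i][j] = (C[i][j] - sum(B[i][l] * B[j][l] for l in range(j))) / B[j][j]
    return B

C = [[4.0, 2.0], [2.0, 3.0]]   # hypothetical covariance-type matrix
B0 = cholesky_lower(C)
print(B0)   # [[2.0, 0.0], [1.0, sqrt(2)]]
```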
4. Numerical Examples
Fig. 2 Layout of Complementary Observation System (stations P7, P8, ...; Chiba
Experiment, Kujukuri Coast, Boso Peninsula).
[Observed acceleration time histories, 0-39 sec, and time histories of the
identified parameters, e.g., parameter b11.]
waves are correlated, the values of â_ij and b̂_ij for i ≠ j are observed
not to be zero.
Application 2.- Six acceleration records observed at borehole C0 (GL -1 m)
during different earthquakes are selected (Fig. 6), and each record is used to
identify a univariate, one-dimensional AR(3) model. These records may be
classified into the same category, since the epicenter of each earthquake is
located in the vicinity of the Kujukuri Coast of the Boso Peninsula. The
identified model parameters are shown in Fig. 7, where the trends of the
parameters â11(1) and â11(2) are almost identical; the parameters B̂0, however,
manifest different evolutionary trends. This latter observation is clearly
attributable to the different evolutionary amplitude trends of the earthquakes.
5. Concluding Remarks
6. Acknowledgement
[Fig. 6: six accelerograms observed at borehole C0 (peak values 213.1 and 22.5
among them), record durations 17-39 sec. Fig. 7: time histories of the identified
AR(3) parameters â11(1), â11(2), â11(3) and B̂0 for each record.]
7. References
1.Samaras, E., Shinozuka, M., and Tsurui, A., " ARMA Representation
of Random Processes ", Journal of Engineering Mechanics, ASCE,
Vol. 111, No. 3, March 1985.
2.Deodatis, G., and Shinozuka, M., " An Auto-Regressive Model for
Non-Stationary Stochastic Processes", Stochastic Mechanics Vol.II,
Columbia Univ., April, 1987, pp.227-258.
3.Nagamura, T., Deodatis, G., and Shinozuka, M., " An ARMA Model for
Two-Dimensional Processes ", Journal of Engineering Mechanics,
ASCE, Vol.113, No.2, Feb., 1987.
4. Hoshiya, M., Ishii, K., and Nagata, S., " Recursive Covariance of
Structural Responses ", Journal of Engineering Mechanics, ASCE,
Vol. 110, No. 12, December 1984.
5. Hoshiya, M., and Shibusawa, S., " Response Covariance to Multiple
Excitations ", Journal of Engineering Mechanics, ASCE, Vol. 112,
No. 4, April 1986.
6. Hoshiya, M., Maruyama, M., and Kurita, M., " Autoregressive Model
of Spatially Propagating Earthquake Ground Motion ", Proc. of
Probabilistic Methods in Civil Engineering, ASCE, Blacksburg,
Virginia, May, 1988.
7.Nau, R. F., Oliver R. M., and Pister, K. S., " Simulating and
Analyzing Artificial Nonstationary Earthquake Ground Motions "
Bulletin of Seism. Soc. of Am., Vol. 72, No. 2, April 1982.
8.Takahashi, A., and Kawakami, H., " Estimation of Wave Propagation
by AR model", Proc. of the 43rd Annual Meeting, JSCE, October,
1988 (in Japanese).
9.Gersch, W., Taoka, G. T., and Liu, R., " Structural System
Parameter Estimation by Two-Stage Least-Squares Method ", Journal
of Engineering Mechanics, ASCE, EM5, October, 1976.
10.Hoshiya, M., and Saito, M.," Structural Identification by
Extended Kalman Filter ", Journal of Engineering Mechanics, ASCE,
Vol. 110, No. 12, December 1984.
11.Hoshiya, M., and Maruyama, 0., " Identification of a Running Load
and Beam System ", Journal of Engineering Mechanics, ASCE, Vol.113,
No.6, June, 1987.
12. Hoshiya, M., " Application of the Extended Kalman Filter-WGI
Method in Dynamic System Identification ", Stochastic Structural
Dynamics-Progress in Theory and Applications, Elsevier Applied
Science, pp. 103-124, 1988.
13. Katayama, T., Yamazaki, F., Nagata, S., Lu, L. and Turker, T.,
"Development of Strong Motion Database for the Chiba Seismometer
Array", Earthquake Disaster Mitigation Engineering Laboratory
Report No. 90-1 (14), Institute of Industrial Science,
University of Tokyo.
THE EFFECT OF A NON-LINEAR WAVE FORCE MODEL
ON THE RELIABILITY OF A JACK-UP PLATFORM
Abstract
The reliability against failure due to excessive stresses in the legs of a jack-up drilling
rig is investigated.
A 3-D linear Timoshenko beam model of the jack-up is used to determine a non-linear
relation between the wave height and the maximum leg stresses. The non-linearities
follow from the use of Morison's formula, Stoke's fifth order wave theory and integration
of wave forces to the actual water surface. Loads due to current, wind and gravity are
included, partly through probabilistic models. Uncertainties in structural parameters such
as the drag coefficient and the yield stress of the leg material are taken into account.
With this model, the probability of failure due to excessive leg stresses is determined
using first and second order reliability methods. These results are compared to the safety
factors according to established codes of practice for allowable stresses. A similar analysis
is given for the safety against overturning. It is found that the established practice for
site approval of jack-up platforms for these two failure modes leads to probabilities of
failure which differ by orders of magnitude.
1. Introduction
Self-elevating mobile offshore drilling units (jack-ups) are being used for water
depths up to about 100 m. The current trend is to design jack-up platforms for
operation in deeper, more exposed areas, such as the northern part of the North Sea.
A jack-up platform is designed for a certain set of environmental conditions but in
many cases these conditions are incompatible with those found on the specific location
where the platform is scheduled to operate. Therefore, a site-specific assessment of the
jack-up platform is normally performed. Different national and international requirements
exist today. However, common to all these requirements is that they are based on
deterministic procedures, where the functional loads and environmental parameters are
chosen such that conservative results are to be expected. Based on these load parameters,
a structural response analysis is performed.
The location approval usually includes a check of stress levels in the legs and pinions, a
calculation of the necessary preload capacity and holding power of the jacking mechanism
and a check of the overturning stability.
The acceptance criteria are in general based on Load Resistance Factor Design (LRFD)
principles where the aim is to ensure consistency in the acceptance checks for various
failure modes and uniformity in risk levels from one location assessment to another.
It is the purpose of the present paper to relate the deterministic safety factors obtained
from these standard procedures with corresponding probabilities of failure in order to
investigate whether such uniformity in risk levels indeed exists with the present practice.
The failure modes considered are compressive chord stress failure and failure due to
overturning. In [1] an analysis of the overturning stability was performed using some
simplifying assumptions in connection with a direct integration of the probability of
failure. In the present paper the FORM/SORM methods, see for instance [2], are applied
using a statistical description of all pertinent parameters.
The results are presented in a diagram showing for an example platform the relations
between the deterministic safety factors and the corresponding probabilities of failure or
safety indices.
The paper only deals with extreme loadings and no account is given to the possibility of
failure due to fatigue.
connected to the hull and the jack houses through hinges at the lower and upper guides
and through rotational and vertical springs modelling the flexibility of the elevating
systems and the jack houses.
The hull weight is applied to the legs at the positions of the elevating systems. The
hydrodynamic loads on the legs are calculated using Morison's equation in connection
with an equivalent leg model, [3], without any allowance for shielding or interference
effects. The elevation, velocity and acceleration profiles for the waves are calculated using
Stoke's 5th order wave theory. The current velocity is scaled to yield a constant mass
flow, [3], before it is added to the wave particle velocity. The only vertical component
of the wave loading considered is that due to buoyancy variation effects.
The wind loading is calculated for leg sections above the hull using the same equivalent
leg model as for the wave loading. The wind load on the hull is determined according
to a projected area approach.
Coherent directions of the waves, current and wind are assumed. The design wave height
for a specific location is normally derived as the most probable largest wave height in a
characteristic, three hours short term sea condition. The associated wave period is deter-
mined from an average relation, [3]. The current velocity is specified with a piece-wise
linear variation between the sea bed and the still water surface. The wind velocity is
defined as the one-minute mean 10 m above the still water surface. Its variation with
height is taken from the Danish Code [6].
For the design calculations conservative functional loads should be chosen, typically
representing mean values less two standard deviations. This applies to the weight of the
hull and the location of its center-of-gravity.
A series of static calculations using the structural and load modelling described above is
performed in order to determine the directions and positions of the Stoke's 5th order
wave which produce the highest stresses or lowest safety factors. In these calculations,
the corrections due to Euler amplification, deck sway and inertia forces are included in
an approximative manner, [3]. However, a comparison with full non-linear stochastic
time simulation calculations, [5], shows reasonably good agreement for the extreme
loadings on an example platform. For instance, the extreme leg reaction obtained by
the design procedure was found in [5] to be within 0.97 to 1.12 of the results from
the stochastic time simulation procedure. This indicates that the design procedure
can be expected to yield reasonable results.
[Sketch of the jack-up platform under head waves, indicating the wave crest position
and the aft and forward legs.]
(1)
where Mo and Mw are the overturning moments due to waves (including current) and
wind, respectively. The leg sectional modulus for the forward chord is denoted by W
and the leg sectional area by A. The axial leg load is P. The deck sway is denoted by
δ, and L is the distance between the aft and forward legs. It should be mentioned that
formula (1) is obtained assuming simply supported boundary conditions at the interface
between the spud cans and the sea bed. Contributions due to dynamic amplification are
included in the moment Mo. Euler amplification is taken into account by reducing the
leg bending stiffness by the factor (1 - P/P_E), where P_E is the Euler load of the leg.
Finally, it is seen that the last term in formula (1) gives the contribution to the chord
stress from the deck sway δ.
For each of the aft legs, the axial load becomes

P = (Mg/2) [1 - d/L] + mg   (2)

where M is the mass of the hull and d is the distance from the center of the aft legs
to the center of gravity of the hull, see Figure 2. The mass of the part of the leg
situated above the lower guide is denoted by m, and g is the acceleration of gravity.
The wind-induced overturning moment Mw is given as

Mw = (1/2) ρ_a C_w A_w L_w V²   (3)

where ρ_a is the mass density of air, C_w is an equivalent drag coefficient and A_w the
projected wind area for wind coming head on. The length L_w is measured vertically from
the spud cans to the center of action of the wind forces on the hull. Finally, V is a
characteristic wind velocity.
The overturning moment Mo due to waves and current has previously been found, [1], to
follow quite accurately a cubic polynomial in the wave height H of the Stoke's 5th
order wave:

Mo = C_D (a0 + a1 H + a2 H² + a3 H³)   (4)

Here C_D is the drag coefficient for the individual bracing members, and a0, a1, a2 and
a3 are coefficients depending on the current profile and, to a minor extent, on the
ratio C_M/C_D. It is noted that the non-linear terms in equation (4) are due to the use
of Stoke's 5th order wave theory, non-linear drag forces and integration of the wave
forces to the instantaneous position of the water surface at each leg. The wave
positions used in deriving equation (4) are those producing the highest overturning
moment Mo. Dynamic effects are included in equation (4) using a simple
one-degree-of-freedom approach.
The contribution to the stress σm from the deck sway δ is small. Therefore, the deck
sway is approximated by

(5)

where the coefficients b0 and b1 depend on the current profile. Note that the
zero-crossing period does not enter equations (4)-(5). This is because a one-to-one
relation, [3], between the wave height and wave period is assumed in the deterministic
analysis leading to these equations. Since the object of the present study is a survival
analysis, where the response almost exclusively depends on wave heights and not wave
periods, this model is reasonable.
Substitution of equations (2)-(5) into equation (1) yields σm = σm(H, V), provided the
current profile, the drag coefficient C_D and the structural layout are given. For a
specific site, data regarding the design wave height H and wind velocity V are obtained
from hindcasting or other statistical sources.
According to the Danish Offshore Code [4], the design response should be calculated
using a partial load factor of 1.3 on the environmental loads and thus on the over-
turning moments Mo and Mw and on the contribution due to the deck sway. The partial
load factor for the functional load P should be taken to be 1.0. Finally, a drag coeffi-
cient CD = 0.7 must be used.
The design stress σm,d is to be compared with the critical compressive design chord stress
σc,d, calculated taking into account yielding as well as buckling effects. As design yield
stress, the 5% yield stress fractile divided by a member resistance factor of 1.21 is used,
whereas the modulus of elasticity is to be divided by the factor 1.48. The code require-
ment is σm,d ≤ σc,d for an acceptable design. The ratio
(6)
is denoted the safety factor Sσ and should then be greater than or equal to one.
With regard to failure due to overturning or, more precisely, zero soil reaction on the
forward leg, the moment equilibrium criterion
Mg(d−δ) + mg(L−δ) ≥ Mo + Mw (7)

must be satisfied. The corresponding safety factor Sγ, the ratio of the stabilizing to the
overturning moments, is

(8)
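The body of equation (8) is not reproduced in the text; by analogy with the safety factor (6), it can be read as the ratio of stabilizing to overturning moments. A minimal sketch of the check in equation (7), with purely illustrative load values (not the paper's):

```python
G = 9.81  # gravitational acceleration, m/s^2

def overturning_safety_factor(M, m, d, L, delta, Mo, Mw):
    """S_gamma read as (stabilizing moment)/(overturning moment), cf. (7)-(8).
    M, m  : hull mass and leg mass (kg)
    d, L  : center-of-gravity distance and leg length (m)
    delta : deck sway (m); use delta - e to include the support offset
    Mo, Mw: wave/current and wind overturning moments (Nm)"""
    stabilizing = M * G * (d - delta) + m * G * (L - delta)
    return stabilizing / (Mo + Mw)

# Illustrative values only; the design is acceptable if the factor is >= 1.
S_gamma = overturning_safety_factor(M=6.1e6, m=24e3, d=11.0, L=39.0,
                                    delta=0.5, Mo=350e6, Mw=60e6)
print(round(S_gamma, 2))
```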
3. Probabilistic description.
Although all parameters entering equations (1)-(5) can be treated as statistical
variables, only the most important ones are considered here. The following parameters are
given a normal distribution: hull mass M, distance to c.o.g. d, wind area Aw and drag
coefficient CD. The critical stress σc, evaluated without partial resistance factors, is taken
to be lognormally distributed.
An exponential distribution is assigned to the square of the one-year maximum wind
velocity V
in accordance with the Danish Code [6]. The reference velocity V50 is the wind velocity
with a return period of 50 years.
The current profile is assumed fixed. The wave heights are assumed Rayleigh distributed
conditional on the significant wave height Hs. The distribution function for the largest
wave height H in a sea state is
P(H < x IHs) = exp(-Nexp(-2(x/Hs)2)) (10)
where N is the average number of peaks in the short term sea state considered. Here it
is assumed that the duration of each sea state is three hours.
A Weibull distribution is used for the significant wave heights Hs such that the dis-
tribution for the maximum significant wave height Hs,m during a one-year period
becomes
(11)
where the location and scale parameters hw and p are determined for the actual location
considered and where N1 = 1 year/3 hours = 2920.
Finally, a positive correlation between the wind velocity V and the significant wave
height Hs is applied.
Thereby, the probabilistic model is defined and a standard FORM/SORM analysis, [2],
can be performed with the limit state functions
(12)
and
Mg(d−δ) + mg(L−δ) − (Mo + Mw) ≥ 0 (13)
Two types of analysis are performed. In the first, fixed values of the significant wave height
Hs and the wind velocity V are used. For each significant wave height the probabilities of
failure and the corresponding safety indices are compared to the safety factors (6) and (8).
In the second analysis, a specific site is chosen for a one year operation.
In addition to the comparison between probabilities of failure and safety factors, the
sensitivities of the safety indices to the parameters entering equations (1)-(11) are also
given.
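The FORM step can be sketched with the standard Hasofer-Lind/Rackwitz-Fiessler iteration. The paper's own computations used the PROBAN program; the limit state below is a toy function in standard normal space, not the platform model of equations (12)-(13), and the transformation to standard normal variables is assumed already done:

```python
import math
import numpy as np

def form_beta(g, grad, u0, tol=1e-8, itmax=100):
    """Hasofer-Lind / Rackwitz-Fiessler iteration in standard normal space:
    find the design point u* (closest point on g(u)=0 to the origin)."""
    u = np.asarray(u0, dtype=float)
    for _ in range(itmax):
        gu, dg = g(u), grad(u)
        u_new = (dg @ u - gu) * dg / (dg @ dg)
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u), u

# Toy limit state (NOT the paper's model):
g = lambda u: 3.0 - u[0] - 0.2 * u[1]**2
grad = lambda u: np.array([-1.0, -0.4 * u[1]])

beta, u_star = form_beta(g, grad, [0.0, 0.0])
pf = 0.5 * math.erfc(beta / math.sqrt(2.0))  # Pf = Phi(-beta)
print(round(beta, 3), f"{pf:.2e}")
```

The safety index quoted in the paper is recovered from the failure probability the same way, via β = −Φ⁻¹(Pf).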
4. Numerical Example.
The pertinent structural parameters for the jack-up platform considered are W =
0.906 m³, A = 0.293 m² and L = 39 m. The coefficients in equation (4) for the
overturning moment Mo for this jack-up platform are, [1], a0 = 11.0 MNm, a1 = 13.1
MNm/m, a2 = 0.671 MNm/m² and a3 = 0.0459 MNm/m³, whereas the coefficients in the
deck sway equation (5) become b0 = 0.21 m, b1 = 0.045 m/m and b2 = 0.002 m/MNm.
In deriving these coefficients, a current profile with a surface current of 0.8 m/s and a
bottom current of 0.2 m/s has been used. The water depth is 90 m. For more details,
see [1].
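As a quick check, the cubic fit (4) with these coefficients can be evaluated directly at the design wave height of 13.4 m quoted below:

```python
def overturning_moment(H, a=(11.0, 13.1, 0.671, 0.0459)):
    """Cubic fit (4) for the wave/current overturning moment Mo (MNm);
    H is the wave height in metres, coefficients for the example platform."""
    a0, a1, a2, a3 = a
    return a0 + a1 * H + a2 * H**2 + a3 * H**3

Mo = overturning_moment(13.4)  # design wave height, m
print(round(Mo, 1))            # overturning moment in MNm
```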
The functional loads are given by a deterministic lightweight of the hull equal to 5200 t,
a variable load of 1800 t and a mass of the leg structure above the lower guide of m = 24 t.
The total mass me of one leg is 250 t. For the limit state analysis concerning chord
stress failure the full variable load is applied. The mean value of the hull mass is then
M = 7000 t and the standard deviation is taken to be 300 t. For the overturning
analysis, it is conservatively assumed that only half the variable load is present, thus
M = 6100 t with a standard deviation of 150 t. The mean value of the center-of-gravity
distance is d = 11 m and the standard deviation is taken to be 1 m.
The wind loading on the hull is given by equation (3) with a mean CwAw = 1200 m² and
a standard deviation of 60 m². The arm Lw of the wind moment is taken as fixed, Lw = 113
m. The 50-year maximum wind velocity is specified as 42 m/s.
For the drag coefficient CD, measurements presented in [7] yield a mean value of 0.61
with a coefficient of variation of 24%. This result is used here.
The number N of wave heights in a three-hour sea state is taken as

N = 3 hours / (4.5 s + 0.5 s/m · Hs)

corresponding to numbers slightly higher than 1000.
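Equation (10) together with this expression for N can be evaluated directly; the asymptotic mode of (10), Hs·sqrt(ln N / 2), is a convenient check on the most probable largest wave in a sea state:

```python
import math

def n_waves(hs, duration_s=3 * 3600.0):
    """Average number of wave peaks in a sea state: N = T / (4.5 s + 0.5 s/m * Hs)."""
    return duration_s / (4.5 + 0.5 * hs)

def p_largest_wave_below(x, hs, n):
    """Eq. (10): P(H < x | Hs) = exp(-N exp(-2 (x/Hs)^2))."""
    return math.exp(-n * math.exp(-2.0 * (x / hs) ** 2))

hs = 7.0                                  # design significant wave height, m
n = n_waves(hs)                           # 1350 peaks in three hours
h_mp = hs * math.sqrt(math.log(n) / 2.0)  # asymptotic mode of Eq. (10)
print(round(n), round(h_mp, 1))
```

For Hs = 7.0 m this gives about 13.3 m, consistent with the 13.4 m design wave height quoted later in the text.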
For the example jack-up, the critical compressive chord stress σc is rather close to the
yield stress of the chord material due to a short bay length. With a specified yield
stress (5% fractile) of 500 MN/m², the mean value of the critical stress becomes 500
MN/m², with a standard deviation of 25 MN/m² according to [4]. Local bending stresses
in the chords from contact forces at the lower guides are small for the example platform
because of a very stiff elevating system.
Figure 3 shows the results for a three-hour exposure to long-crested, short-term sea
states as a function of the significant wave height. A constant wind velocity of V = 42 m/s
is used for all sea states in Figure 3.
Figure 3: Safety factors (S) and corresponding β-values and probabilities of failure (Pf)
for three-hour operation in stationary sea states (Hs), V = 42 m/s.
The deterministic safety factors based on established practice, Sσ and Sγ, are given by
equations (6) and (8). For the present jack-up platform the design wave height is
13.4 m, corresponding to a significant wave height Hs = 7.0 m. It is seen that the
criteria for both overturning stability and chord stress failure are nearly satisfied (Sγ =
0.96 and Sσ = 0.99) for Hs = 7.0 m. Thus, in the deterministic design based on the
codes applied here, the safety against failure in these two modes is nearly the same.
The probabilistic analysis gives quite different results. In complete agreement with the
findings in [1], the probability of failure due to overturning is relatively high, about 10⁻²,
for the design sea state. Contrary to that, the probability of compressive chord stress
failure is very low. For Hs = 7.0 m this probability of failure is Pf = 1.4·10⁻⁴ with a
corresponding safety index β = −Φ⁻¹(Pf) = 3.62. The results shown in Figure 3 are from
the SORM analysis, but only small differences were found when compared to the FORM
results.
The explanation for this difference between probability-of-failure levels for the two failure
modes is probably to be found in the different design philosophies applied when deriving
the safety factors Sσ and Sγ. The safety factor Sσ for the stress levels, equation (6),
has its basis in civil engineering codes, where there is a tradition for a high degree of
structural safety. This is reflected in the high partial load and resistance factors used.
On the other hand, the safety factor Sγ against overturning, equation (8), is unique to
jack-up platforms and does not contain any partial resistance factor. The reason for not
choosing a partial resistance factor is probably that the limit state criterion (7) is
somewhat conservative, as no account is taken of support eccentricity at the sea bottom.
Due to the large diameters of the spud cans, the center-of-action of the vertical reac-
tions from the sea bottom will be situated aft of the leg center for head waves. Assuming
a triangular support pressure on the spud can, this offset e will for the example plat-
form be 2 m. In order to evaluate the effect of this offset on the probability of failure,
the FORM/SORM analysis is repeated using the limit state functions (12) and (13) with
the deck sway δ replaced by δ−e. The offset e is taken as normally distributed with mean
value 2 m and standard deviation 1 m. The result is included in Figure 3, and it is
seen that the inclusion of the offset lowers the probability of failure significantly.
However, the difference between the probabilities of failure for the two failure modes is
nearly unaltered and still about two orders of magnitude.
Finally, a one-year operation in the Baltic Sea is considered. Based on wave statistics,
[8], the parameters hw and p in the Weibull distribution, equation (11), are determined
to be hw = 1.2 m and p = 1.29.
A variation of the wave direction ±30° from head sea only reduces the chord stresses
by 5%. Therefore, long-crested waves are assumed in the present study. In [1], it is
found that the probability of failure due to overturning is reduced by a factor of about
2.5 when short-crested waves are used.
The most probable largest wave height in the Baltic Sea in one year is 13.6 m, which
is very close to the design wave height of 13.4 m. The safety factor against compressive
chord stress failure, equation (6), is Sσ = 0.98, whereas the safety factor against over-
turning failure, equation (8), becomes Sγ = 0.95.
From the FORM/SORM analysis the following results are obtained for the four cases
considered:

A.1 Chord stress failure (e = 0): Pf = 6.4·10⁻⁴, β = 3.22
A.2 Chord stress failure (e = 2 m, C.O.V. = 0.5): Pf = 1.1·10⁻⁴, β = 3.69
The same difference between the probabilities of failure as found in the short-term sea
states is observed here.
The importance factors and design points from the FORM analysis for the stochastic
variables are shown in Table 1, whereas Table 2 contains a sensitivity analysis for all
parameters used. The notation A.1, A.2, B.1 or B.2 refers to the above-mentioned cases.
Table 1: Importance factors in pct. for the statistical parameters. Baltic Sea, one year
operation. Case A.2: chord stress failure, Case B.2: overturning failure. Leg support offset
assumed.
From Table 1 it is seen that the major uncertainty is due to the significant wave
height Hs and that the only two other statistical parameters of importance are the wave
height conditioned on Hs and the drag coefficient CD. All other parameters can be taken
as fixed at their mean values. However, as seen from Table 2, the safety index β
depends of course on these mean values.
Table 2 can be used to assess the sensitivity of the safety index β to all parameters.
The probability of chord stress failure is especially sensitive to changes in the drag
coefficient CD, the critical stress σc and the leg sectional modulus W. Minor changes in
the response coefficients a0, a1, a2, a3, b0, b1 and b2 are not important. For the
probability of failure due to overturning the most important parameters are the hull
mass M, the center-of-gravity distance d and the drag coefficient CD. Of course, for
both failure modes a change in the significant wave parameters hw and p is of
significant importance. However, a direct interpretation of the results given in Table 2 is
not easy because p and hw are correlated.
5. Conclusions
For a typical jack-up platform probabilities of failure have been calculated by the
FORM/SORM methods. The failure modes considered are collapse of a chord in com-
pression and overturning of the platform.
The structural modelling takes into account non-linear effects due to the use of
Stokes' 5th-order wave theory, Morison's equation and integration of the wave loads to
the instantaneous wave surface. In addition, the destabilizing effect of the hull weight on
the legs, the deck sway and the dynamically excited inertia forces are taken into
account.
A sensitivity analysis shows that the uncertainty in the wave height estimation is the
main contributor to the calculated probability of failure. The only other uncertain
parameter of any importance is the drag coefficient. The remaining parameters can be
taken as deterministic quantities.
The main finding of the study is that for conditions with equal factors of safety accord-
ing to the established design practice the probability of failure due to overturning is
considerably higher than the probability of chord failure due to high stresses.
The actual figures for the probabilities of failure can be debated. For example, the
selection of a standard 2-D wave theory may be a conservative element. However, since
leg stress response and overturning moment depend on the load model in the same
manner, this model uncertainty cannot alter the overall conclusion that the present codes
of practice do not ensure a reasonable uniformity of risk level for different failure modes.
Acknowledgement
The research work presented in this paper was partially sponsored by the Danish Tech-
nical Research Council (Grant no. 5.26.09.06) and by the Vetlesen Foundation. The
reliability analysis program PROBAN was used for the reliability calculations. Valuable
discussions with Professor A.E. Mansour during the first author's visit to Berkeley are
greatly appreciated.
References
[1] Jensen, J.J., Mansour, A.E. and Pedersen, P.T.: "Reliability of Jack-Up Platform Against
Overturning", DCAMM-report No. 399, November 1989, Lyngby, Denmark, (Submitted to J. of
Marine Structures).
[2] Madsen, H.O., Krenk, S. and Lind, N.C.: "Methods of Structural Safety", Prentice Hall Inc.,
Englewood Cliffs, New Jersey, 1986.
[3] Odland, J.: "Response and Strength Analysis of Jack-Up Platforms", Norwegian Maritime
Research, No.4, pp. 2-25, 1982.
[4] Dansk Ingeniørforening's Code of Practice for Pile-Supported Offshore Steel Structures, DS 449,
Translation Edition, September 1984, DIF-ref. No. NP-162-T, Teknisk Forlag, Copenhagen,
Denmark.
[5] Kjeøy, H., Bøe, N.G. and Hysing, T.: "Extreme Response Analysis of Jack-Up Platforms",
Proc. Second Int. Conf. on The Jack-Up Drilling Platform, Ocean Eng. Research Center, Dep.
of Civil Engineering, London, September 1989.
[6] Dansk Ingeniørforening's Code of Practice for sikkerhedsbestemmelser for konstruktioner (Safety
of Structures), DS 410, Danish Edition, June 1982, DIF-ref. No. NP-157-N, Teknisk Forlag,
Copenhagen, Denmark.
[7] Kim, Y.K. and Hibbard, H.C.: "Analysis of Simultaneous Wave Force and Water Particle
Velocity Measurements", Proc. Offshore Technology Conference, OTC paper No. 2192, 1975.
[8] Hogben, N., Dacunha, N.M.C. and Olliver, G.F.: "Global Wave Statistics", British Maritime
Technology Ltd., UK, 1986.
OPTIMUM CABLE TENSION ADJUSTMENT
USING FUZZY REGRESSION ANALYSIS
1. INTRODUCTION
Determining the optimum cable pre-stresses (i.e., prestress) in the design of cable-stayed bridges is
one of the most important, but time-consuming, procedures. Various kinds of errors will be introduced during
construction. Therefore, cable length adjustment is necessary to alter the stress distribution and the
geometrical configuration of the bridge. The authors have developed new methods to overcome these
problems through the use of fuzzy set theory. First, a method is formulated for obtaining the optimum cable
prestress. Secondly, a new system identification method is exploited by applying fuzzy regression analysis.
Finally, a method is formulated to adjust the cable length by shim plates. The results of numerical examples
show that the proposed methods are not only simple to handle, but also very practical for the design and
construction of cable-stayed bridges.
The flow diagrams in Figs. 1 and 2 show the applications of fuzzy set theory to the design and erection
of suspended structures (i.e., cable-stayed bridges, suspension bridges, etc.).
3. FORMULATION
where Fd is the structural member force due to dead load; X̃i and Ki are, respectively, the fuzzy
variable and the member force influence coefficient for a unit pre-stress of the cable. The wave
symbol (i.e., ~) indicates fuzzy sets, which are specified in terms of membership functions as shown
in Fig. 3. If the design goal F̃o of the bending moment at a certain nodal point is given by the
region between 1000 and 1200 tfm, then one can assume that Fo is 1100 tfm and ΔF is 100 tfm.
Eq. (1) is based on the assumption that the member force F̃o has fuzziness. The fuzzy variable X̃i
is obtained by solving the following maximum problem of fuzzy regression analysis, i.e., a linear
programming problem [4].
Find ai, ci

maximize   J(ci) = Σ(i=1..N1) Σ(j=1..M1) ci·|Kji|                                    (2)

subject to  Foj ≥ Fdj + (1−h)·Σ(i=1..N1) ci·|Kji| − (1−h)·ΔFj + Σ(i=1..N1) ai·Kji    (3)

           −Foj ≥ −Fdj + (1−h)·Σ(i=1..N1) ci·|Kji| − (1−h)·ΔFj − Σ(i=1..N1) ai·Kji   (4)
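As the text notes, (2)-(4) form a linear program. A minimal sketch with SciPy's `linprog`, reading the constraints as in equations (3)-(4); the two-cable, two-member-force numbers below are hypothetical, not the bridge of the numerical example:

```python
import numpy as np
from scipy.optimize import linprog

# Influence coefficients Kji (M1 = 2 member forces, N1 = 2 cables), dead-load
# forces Fd and fuzzy design goals (Fo, dF) -- all numbers are illustrative.
K = np.array([[1.0, 0.5],
              [0.4, 1.0]])
Fd = np.array([800.0, 900.0])
Fo = np.array([1100.0, 1200.0])
dF = np.array([100.0, 100.0])
h = 0.5

M1, N1 = K.shape
# Decision vector z = [a1..aN1, c1..cN1]; maximizing Eq. (2) means
# minimizing its negative, since linprog minimizes.
obj = np.concatenate([np.zeros(N1), -np.abs(K).sum(axis=0)])

# Eqs. (3)-(4) rearranged into A_ub @ z <= b_ub form:
#   +K a + (1-h)|K| c <= (Fo - Fd) + (1-h) dF
#   -K a + (1-h)|K| c <= (Fd - Fo) + (1-h) dF
A_ub = np.vstack([np.hstack([K, (1 - h) * np.abs(K)]),
                  np.hstack([-K, (1 - h) * np.abs(K)])])
b_ub = np.concatenate([(Fo - Fd) + (1 - h) * dF,
                       (Fd - Fo) + (1 - h) * dF])

bounds = [(None, None)] * N1 + [(0, None)] * N1  # ai free, spreads ci >= 0
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
a, c = res.x[:N1], res.x[N1:]
print(res.success, round(-res.fun, 6))
```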
N2
Z = L; Ai • F i ...................... (6) (Fi: Error mode vector, Ai replaces ai, which was
i~l assumed constant in the conventional SI method)
where the fuzzy coefficient Ãi is called the error contribution rate, which has to be determined. If fuzzy
regression analysis is applied with the introduction of the threshold parameter h (0 ≤ h < 1), the error
contribution rates will be given by Eqs. (7)-(10) below. Though the minimum problem is also discussed in
reference [5], only the maximum problem is discussed here for consistency of methodology. The
algorithm of FSI is as follows:
Find ai, ci

maximize   J(ci) = Σ(i=1..N2) Σ(j=1..M2) ci·|Fji|                                 (7)

subject to  Zj ≥ (1−h)·Σ(i=1..N2) ci·|Fji| − (1−h)·ej + Σ(i=1..N2) ai·Fji         (8)

           −Zj ≥ (1−h)·Σ(i=1..N2) ci·|Fji| − (1−h)·ej − Σ(i=1..N2) ai·Fji         (9)
where
M2 = the number of field measurement items
N2 = the number of error factors
Zj = the components consisting of camber errors and member force errors
Fji = j-component of error mode vector Fi
ci, ai = parameters of the membership function μÃi(a)
ej = measurement (i.e., fuzzy output) error
h = fitness (threshold) parameter in dealing with fuzzy data (0 ≤ h < 1)
Let Ỹ = (Y, ΔY) be the designers' intended or judged tuning goal, for example, for member forces or
displacements by shim adjustment. A shim plate determination algorithm can be formulated in a way similar
to Section 3.1 if Si is assumed to be the influence coefficient of cable tension force, camber change, or bearing
action force due to a unit shim plate thickness.
Ỹ = Σ(i=1..N3) B̃i·Si                                                             (11)
Find ai, ci

maximize   J(ci) = Σ(i=1..N3) Σ(j=1..M3) ci·|Sji|                                 (12)

subject to  Yj ≥ (1−h)·Σ(i=1..N3) ci·|Sji| − (1−h)·ΔYj + Σ(i=1..N3) ai·Sji        (13)

           −Yj ≥ (1−h)·Σ(i=1..N3) ci·|Sji| − (1−h)·ΔYj − Σ(i=1..N3) ai·Sji        (14)
where
M3 = the number of member forces
N3 = the number of cable members
Yj = central value of the tuning of the j-component
ΔYj = one half of the difference between the upper and lower bounds of the tuning of the j-component
Sji = influence coefficient of the j-component of member force or camber due to a unit shim adjustment
of cable (i)
ci, ai = parameters of the membership function μB̃i(a)
h = fitness (threshold) parameter in dealing with fuzzy data (0 ≤ h < 1)
4. Numerical examples
w (tf/m)
Girder          7.0
Pavement, etc.
Tower           5.0
Cable           0.05~0.10
[Figure residue: bridge model with girder nodes and cable members 501-556; the design
goals include girder bending moments of 500~+1200 tfm and −1000~+1000 tfm, and cable
tension goals from 170~250 tf (cables 504-506) through 300~350 tf (cables 502-503) up
to 550~650 tf (cable 501).]
Error mode No. | Precise value | FSI (h=0.5, ej=2.0) | SI method
a1             | -0.5          | -0.49972            | -0.49723
a2             | +0.5          | 0.49995~0.50001     | 0.50004
a3             | +0.6          | 0.59673~0.59797     | 0.57924
a4             | -0.4          | -0.40173~-0.39908   | -0.40025
a5             |  0.0          | -0.00293            | -0.00771
a6             | +1.0          | 0.99978~1.00000     | 0.99994
5. Conclusion
1) The algorithms of FPS, FSI and FSA are simple to implement; therefore, one can reach solutions, which
are intended or justified by structural designers, in a short time.
2) Determination of the pre-stress and of the shim adjustment is made quicker and is improved by the FPS
and FSA, respectively.
3) An extensive estimation of the erection state of the bridge can be derived from the FSI; therefore, more
accurate erection management is possible.
4) All of these methods are reduced to linear programming problems. Therefore, existing subroutine
packages are available and can be used readily for system integration.
5) Since the designer's intention can be formulated in the form of design goals and design tuning, they are
effectively included in the cable adjustment for complicated structures such as steel-concrete cable-
stayed bridges and partially anchored cable-stayed bridges.
The practical application of these methods has been planned, and they will be further improved through
experience.
REFERENCES
[1] Tanaka, H., Kamei, M. and Kaneyoshi, M.: Cable Tension Adjustment by Structural System Identifica-
tion, CABRIDGE, Bangkok, Nov., 1987
[2] Fujisawa, N. and Tomo, H.: Computer-aided Cable Adjustment of Stayed Bridges, IABSE Proceedings
P-92/85, pp. 185-190, 1985
[3] Yamada, Y., Furukawa, K., Egusa, T. and Inoue, K.: Studies on Optimization of Cable Pre-stresses for
Cable-stayed Bridges, Proceedings, JSCE, Vol. 356/1-3, pp. 415-423, 1985
[4] Terano, T., Asano, K. and Sugeno, M.: Introduction to fuzzy system, Ohmu-sha, pp. 67-81, 1987 (in
Japanese).
[5] Kamei, M., Furuta, H., Kaneyoshi M. and Tanaka, H.: Cable Tension Adjustment by Fuzzy Structural
System Identification, Proc. 44th Annual Meeting of JSCE, Part 1-160, Oct., 1989 (in Japanese)
APPENDIX
Notation
Ai = fuzzy coefficient of linear equation
Ci = scatter of membership function of Ai, Si and Xi (Fig. 3)
ej = measurement (i.e., fuzzy output) error
Fd = structural member force vector due to dead load
Fi = error mode vector
Fji = j-component of error mode vector Fi
Abstract
A Bayesian approach for assessing model uncertainty and including its effect in structural reli-
ability analysis is presented. Model uncertainties due to formulation inexactness, measurement
error and insufficient data are included. Simple formulas are derived that directly show the effect
of model uncertainty on the reliability index.
Introduction
An important source of uncertainty in structural reliability is the uncertainty inherent in the
mathematical models employed to describe the behavior or the limiting state of a structure. An
expression describing the capacity of a member or the failure mechanism of a structural system is
such a model. In most engineering applications, mathematical models are idealizations and
describe the reality only within an unknown approximation. For a meaningful evaluation of struc-
tural safety, therefore, it is essential that the uncertainty associated with each employed model be
assessed and incorporated in the reliability analysis. In this paper, a formulation based on the
Bayesian statistical approach is proposed for this purpose.
In the context of this paper, a "model" is a set of one or more mathematical expressions relat-
ing a set of measurable or observable quantities x = (x1, x2, …, xn), denoted herein as variables.
To formulate these expressions, one normally introduces a set of constants θ = (θ1, θ2, …),
denoted model parameters, which stand for the inherent properties of nature or of the structure
under consideration. These parameters may or may not have physical meaning and usually are not
observable. Some might be known constants (e.g., the gravity acceleration), while others are un-
known and must be estimated in the process of developing the model. Our concern here is with the
unknown parameters.
Before discussing general forms of the model, we consider two examples. The first is a well
known model describing the flexural capacity, Mu, of a singly reinforced, under-balanced concrete
beam of rectangular cross section (Park and Paulay 1975)
In this model, the cross-sectional area of the reinforcing bar, As, the yield stress of the bar, fy, the
nominal compressive strength of concrete, fc′, and the beam dimensions, b and d, are the model
variables, whereas φ and η are the model parameters. The second example concerns the liquefac-
tion failure of saturated sandy soils during cyclic ground motions generated by earthquakes.
Motivated by the work of Seed et al. (1985) and Liao et al. (1988), the limit-state surface, describ-
ing the boundary between liquefaction and no-liquefaction events, may be modeled by

g(r, N, θ) = exp[−(θ1 + θ2 ln r + θ3 N)] − 1 = 0    (2)

with g ≤ 0 denoting the liquefaction event and g > 0 denoting the no-liquefaction event, in which
the normalized cyclic stress ratio, r, and the standard penetration resistance, N, are the model vari-
ables, and θ1, θ2 and θ3 are the model parameters. The above two examples, both representing
single-equation models, correspond to two fundamentally different situations in model estimation,
and this difference will be elaborated on later in this paper.
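For illustration, the limit-state function of Eq. 2 can be evaluated directly. The parameter values below are hypothetical (not estimates from Liao et al.), with signs chosen so that denser soil and lower stress ratios resist liquefaction:

```python
import math

def g_liq(r, n_spt, theta):
    """Limit-state function of Eq. 2: g <= 0 marks liquefaction, g > 0 no
    liquefaction. r: normalized cyclic stress ratio; n_spt: standard
    penetration resistance N."""
    t1, t2, t3 = theta
    return math.exp(-(t1 + t2 * math.log(r) + t3 * n_spt)) - 1.0

theta = (2.0, 3.0, -0.2)          # hypothetical parameter values
print(g_liq(0.2, 25, theta) > 0)  # dense soil, low stress: no liquefaction
print(g_liq(0.7, 2, theta) > 0)   # loose soil, high stress: liquefaction
```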
In the most general case, an m-equation model is described by a set of m equations of the
form

gi(x, θ) = 0,   i = 1, 2, …, m    (3)

where x is the vector of model variables and θ is the vector of model parameters. The functions
gi(·) may have algebraic forms, such as in Eqs. 1 and 2. More generally, however, they may have
differential or integral forms. The above form is known as a structural model (Bard 1974). The
second example above is of this form. Often the primary purpose of a model is to predict the
values of a subset of (dependent) variables, y, for future observations of the remaining (indepen-
dent) variables, x. To show this, we rewrite the model in Eq. 3 in the form

gi(x, y, θ) = 0,   i = 1, 2, …, m    (4)

Obviously, the dimension of y should not be greater than the number of equations, m. If Eqs. 4
can be solved for y (when the dimension of y is equal to m), the model can be written in the more
convenient form

y = g(x, θ)    (5)

This is known as the reduced model (Bard 1974). The first example above is of this form.
The functions gi(·) are usually prescribed by the engineer or scientist in accordance with laws
governing physical processes relevant to the problem under consideration. Equation 1, for example,
satisfies the laws of equilibrium for the forces acting perpendicular to the cross section of the beam.
In some instances, however, the form of the model is selected without physical reasoning and only
on the basis of intuition or prior experience. This is the case for the example described by Eq. 2.
This model was selected by Liao et al. (1988) to simulate a curve judgmentally drawn by Seed et
al. (1985) through observed liquefaction data. In either case, the development of the model nor-
mally requires idealization, either for reasons of convenience or because the underlying processes
are not well understood. For example, the idealism in the model in Eq. 1 is partly for conveni-
ence, since more refined models of the flexural capacity employing the complete stress-strain
diagrams of concrete and reinforcing bars are available (e.g., see Park and Paulay 1975). On the
other hand, the idealism in the model in Eq. 2 for liquefaction is largely due to our lack of under-
standing of the liquefaction phenomenon.
The main issue in model development is the estimation of the parameters θ based on a meas-
ured set of values xk (and yk for the reduced model), k = 1, 2, …, n, of the variables, which may be
obtained from experiments or observations in nature. We call this model identification. This is
an old problem and an extensive body of literature on this subject is available (e.g., see Bard
1974). In particular, the widely used methods of least squares and maximum likelihood are aimed
at estimating best-fit values of θ that minimize an error measure and maximize a likelihood meas-
ure, respectively. Methods developed in the field of system identification (e.g., Beck 1989), where
the parameters defining the model of a system are estimated based on the observed response to
known excitations of the system, are also directly relevant. In the context of structural reliability,
however, the least-squares and maximum-likelihood methods are not adequate since they do not
provide measures of the model uncertainty. The Bayesian approach presented in this paper, on the
other hand, provides a convenient and logical means for assessing model uncertainty and incor-
porating its effect in reliability analysis. While this approach has been used in other fields for some
time (see Bard 1974), to the author's knowledge, this paper is the first to use it in the context of
structural reliability analysis.
The topic of model uncertainty has received limited attention in the field of structural reliabil-
ity. The only significant work is that of Ditlevsen (1982, 1988), where a framework in the context
of second-moment reliability analysis was developed. His method essentially amounts to modifying
the second-moment properties of the basic random variables through a transformation, the elements
of which are judgmentally determined. The approach is useful for probabilistic code development
and, in fact, is formulated to justify the recommended practice in several existing codes. However,
it does not make use of statistical data to assess the model uncertainty and as such lacks objectivity.
Furthermore, it is not clear how the method can be used in more advanced reliability methods
employing distributional information.
error, even if it were based on an exact formulation. This is denoted herein as model uncertainty
due to measurement error.
The third source of model uncertainty lies in the statistical estimation of the parameters from
limited data when uncertainties due to model inexactness or measurement error are present. If
these latter sources of uncertainty are not present, then a sample of size equal to the dimension of θ
is necessary to find the solutions for θ, provided they exist and are unique. In the presence of
model inexactness or measurement error, however, no amount of data can provide exact solutions
for θ and the true values of these parameters remain unknown. Furthermore, it is often the case in
engineering that the sample size of observations is small, and this leads to further uncertainty in the
estimation of the parameters.
Consistent with the Bayesian notion of probability (Lindley 1985), we express our uncertainty
in the parameters θ in terms of a probability density function (PDF), f(θ). All three broad
sources of model uncertainty (i.e., formulation inexactness, measurement error and limited sample
size) contribute to this probability distribution. Thus, this distribution captures the essence of
model uncertainty. In the following section, we discuss how this distribution is determined by the
well known Bayesian updating rule.
f(θ) = c L(θ) f′(θ)    (6)

in which L(θ) is the likelihood function, c = [∫ L(θ) f′(θ) dθ]⁻¹ is a normalizing factor, f′(θ) is the
prior distribution, and f(θ) is the updated posterior distribution of θ. For a given set of observations,
the likelihood function is proportional to the conditional probability of making the observations,
given the value θ of the parameters. It represents the objective information contained in the
observed data. The posterior distribution incorporates both the prior information (which might be
entirely subjective) as well as the objective information through the likelihood function.
Guidelines for the selection of prior distributions, including the case where no prior informa-
tion is available, are discussed by Jeffreys (1961) and Box and Tiao (1973), among others. In the
following, we present formulations of the likelihood function for different types of models, uncer-
tainty situations and observations. For simplicity in this presentation, we only consider single-
equation models. The extension to multi-equation models is straightforward.
Likelihood Functions
(a) Exact Reduced Model
Suppose the model
y = g(x, θ)    (7)
is exact, but the measured values ŷ_k, k = 1, ..., n, of the dependent variable y are in error. (The
independent variables, x, are assumed to be measured accurately. For the more general case see
(c) below.) Let e_k = ŷ_k − y_k denote the error in the k-th experiment or observation. Substituting in
Eq. 7 and rearranging terms, we obtain

ŷ_k = g(x_k, θ) + e_k,   k = 1, ..., n      (8)
Let f_e(e | η) denote the joint PDF of e = (e_1, ..., e_n), where η denotes the set of distribution
parameters. The required likelihood function then is

L(θ, η) ∼ f_e(ŷ_1 − g(x_1, θ), ..., ŷ_n − g(x_n, θ) | η)      (9)

Here we have used the sign ∼ to denote proportionality, since we are not interested in the constant
coefficient of the likelihood function, which can be absorbed in the normalizing factor c.
The distribution f_e(e | η) depends on the procedure used for measuring y, and its parameters, η,
can be determined by proper calibration of the measuring devices. Most commonly, errors at successive
measurements are assumed to be statistically independent and normally distributed with zero
mean (after removal of the systematic component of the error) and a common standard deviation,
σ. In that case, Eq. 9 takes the form

L(θ, σ) ∼ (1/σⁿ) exp{ −(1/2) Σ_{k=1}^{n} [ (ŷ_k − g(x_k, θ)) / σ ]² }      (10)
If the distribution parameters η (e.g., σ in Eq. 10) are unknown, they must be considered as additional
uncertain parameters to be estimated through the updating rule in Eq. 6. For notational convenience,
one might include these parameters as a subset of the vector θ.
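The likelihood of Eq. 10 with σ treated as an additional uncertain parameter can be sketched by scanning a grid of (θ, σ) pairs; the linear model and the data are hypothetical placeholders for g(x, θ).

```python
import math

# Hypothetical data for a model g(x, theta) = theta * x
x_obs = [1.0, 2.0, 3.0, 4.0]
y_obs = [1.9, 4.2, 5.8, 8.1]

def log_likelihood(theta, sigma):
    # log of Eq. 10: -n ln(sigma) - (1/2) sum(((y_k - g(x_k, theta))/sigma)^2)
    n = len(x_obs)
    sse = sum((y - theta * x) ** 2 for x, y in zip(x_obs, y_obs))
    return -n * math.log(sigma) - 0.5 * sse / sigma ** 2

# Treat sigma as an additional uncertain parameter: scan (theta, sigma) jointly
best_ll, best_theta, best_sigma = max(
    (log_likelihood(t, s), t, s)
    for t in (1.5 + 0.005 * i for i in range(201))
    for s in (0.01 * j for j in range(1, 200))
)
```

The joint maximum lands near the least-squares slope and near σ̂ = (SSE/n)^{1/2}, the values a formal maximization of Eq. 10 would give.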
(b) Inexact Reduced Model
Suppose the model in Eq. 7 is inexact, such that for the k-th observation a random correction
term γ_k has to be added to maintain the equality, i.e.,

y_k = g(x_k, θ) + γ_k,   k = 1, ..., n      (11)

The term γ_k may represent the influence of the "missing" variables and/or the inexact form of
the model equation. Let f_γ(γ | η′) denote the joint PDF of γ = (γ_1, ..., γ_n), where η′ are the
distribution parameters. With no measurement error, the likelihood function obviously has the
same form as in Eq. 9, with ŷ_k and η replaced by y_k and η′, respectively, and the subscript e
replaced by γ. The distribution parameters, η′, however, are more difficult to determine in this
case, since this would require calibration against an exact model, which may not exist. In many cases,
it is appropriate to assume the γ_k to be statistically independent normals with zero means (to produce an
unbiased model) and a common standard deviation, σ. The form of the likelihood function in Eq.
10 then applies. On the other hand, if the γ_k are assumed to be correlated normals with zero mean
and covariance matrix Σ, the likelihood function becomes

L(θ, Σ) ∼ |Σ|^(−1/2) exp( −(1/2) γᵀ Σ⁻¹ γ )      (12)

where γᵀ = (y_1 − g(x_1, θ), ..., y_n − g(x_n, θ)) must be substituted. Again, if Σ is unknown, its
elements may be considered as a subset of θ to be estimated by the updating rule in Eq. 6.
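A sketch of the correlated-normal likelihood of Eq. 12, with an assumed exponential correlation structure among the correction terms; the model, data, variance, and correlation value are all hypothetical.

```python
import numpy as np

# Hypothetical observations and model g(x, theta) = theta * x
x_obs = np.array([1.0, 2.0, 3.0])
y_obs = np.array([2.0, 4.1, 5.9])

def log_likelihood(theta, Sigma):
    # log of Eq. 12: -(1/2) ln|Sigma| - (1/2) gamma^T Sigma^{-1} gamma,
    # with gamma_k = y_k - g(x_k, theta)
    gamma = y_obs - theta * x_obs
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * logdet - 0.5 * gamma @ np.linalg.solve(Sigma, gamma)

# Assumed correlation rho^|i-j| between correction terms, variance 0.04
rho = 0.5
idx = np.arange(3)
Sigma = 0.04 * rho ** np.abs(np.subtract.outer(idx, idx))
ll = log_likelihood(2.0, Sigma)
```

With Σ = σ²I the expression reduces to the independent-normal form of Eq. 10, which provides a convenient consistency check.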
Now suppose there is also error in measuring y. Eqs. 8 and 11 can be combined to read

ŷ_k = g(x_k, θ) + γ_k + e_k,   k = 1, ..., n      (13)

The form of the likelihood function obviously is the same as before if we use the joint distribution
of γ + e. Often it is not possible to distinguish the two error vectors, and one is forced to assign a
distribution to the sum. For example, if the sum γ + e is assumed to be normal with zero mean
and covariance matrix Σ, the likelihood function will take the form in Eq. 12 with γ replaced by
γ + e, with elements γ_k + e_k = ŷ_k − g(x_k, θ).
which includes the unknown true values x_k. These must satisfy the set of n equations

g(x_k, θ) = 0,   k = 1, ..., n      (16)

Hence, the likelihood function has an implicit dependence on θ. Note that one could at most eliminate
n of the unknown true values in the likelihood function by solving Eqs. 16. The remaining
variables will have to be treated as uncertain parameters to be estimated by the updating rule in Eq.
6. After the updating is done, one may integrate the posterior distribution over these parameters to
obtain the joint distribution of θ and η.
In some applications the available data consist of measured values x̂_k, k = 1, ..., n, for
which the signs of g(x̂_k, θ) are observed. We denote such data as failure/no-failure data. (For the
liquefaction model, such data can be generated by observing liquefaction or no-liquefaction events
for different sets of measured values of N and r. Obviously, such observations are much easier to
conduct for the liquefaction phenomenon than observations on the limit surface.) Assuming the
model is exact but the measured values are in error, the likelihood function takes the same form as
in Eq. 15, but the equality constraints in Eq. 16 must be replaced by the inequality constraints

g(x_k, θ) ≤ 0,   if the event g ≤ 0 (failure) is observed
g(x_k, θ) > 0,   if the event g > 0 (no failure) is observed      (17)

In this case none of the unknown parameters x_k can be eliminated from the likelihood function;
however, the inequalities impose bounds on the acceptable range of the parameters.
(d) Inexact Structural Model
Suppose the structural model in Eq. 14 is inexact, such that in the k-th experiment a random
error term γ_k must be added to maintain the equality, i.e.,

g(x_k, θ) + γ_k = 0,   k = 1, ..., n      (18)

Let f_γ(γ | η′) denote the joint PDF of γ = (γ_1, ..., γ_n) and assume there are no measurement
errors. If x̂_k, k = 1, ..., n, are limit-surface data, the likelihood function takes the form

L(θ, η′) ∼ f_γ(−g(x̂_1, θ), ..., −g(x̂_n, θ) | η′)      (19)

However, if the x̂_k are failure/no-failure data, the likelihood function takes the form

L(θ, η′) ∼ P[ ∩_{k∈F} {g(x̂_k, θ) + γ_k ≤ 0} ∩ ∩_{k∈F̄} {g(x̂_k, θ) + γ_k > 0} ]      (20)

in which P[·] denotes the probability, and F and F̄ respectively denote the sets of experiments in which
the failure and no-failure events are observed. When the γ_k are statistically independent, the preceding
can be written in the form

L(θ, η′) ∼ ∏_{k∈F} F_{γ_k}[−g(x̂_k, θ)] ∏_{k∈F̄} F̄_{γ_k}[−g(x̂_k, θ)]      (21)

in which F_{γ_k}[·] and F̄_{γ_k}[·] respectively denote the cumulative distribution function of γ_k and its
complement.
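A sketch of Eq. 21 for independent normal correction terms, where failure corresponds to g + γ ≤ 0; the one-variable limit-state function, the data, and σ below are hypothetical.

```python
import math

def Phi(t):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical failure/no-failure data: measured x_k and observed outcome
# for a one-variable model g(x, theta) = theta - x.
x_obs = [0.8, 1.1, 1.4, 1.7, 2.0]
failed = [False, False, True, True, True]
sigma = 0.3  # assumed standard deviation of the correction terms gamma_k

def log_likelihood(theta):
    # Eq. 21: product of F_gamma(-g) over failures and of its complement
    # over non-failures, for gamma_k ~ N(0, sigma^2) independent
    ll = 0.0
    for x, f in zip(x_obs, failed):
        g = theta - x
        p_fail = Phi(-g / sigma)   # P[g + gamma <= 0]
        ll += math.log(p_fail if f else 1.0 - p_fail)
    return ll
```

Maximizing (or updating) over θ locates the capacity threshold most consistent with the observed failure pattern; here the data favor θ between the largest non-failure and the smallest failure observation.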
If measurement errors are present, the likelihood function with the limit-surface data becomes

(22)

in which e is the collection of error vectors defined earlier. In this case no restriction applies to the
true values of the variables, and the entire set must be treated as uncertain parameters. If the available
data are of the failure/no-failure kind, the likelihood function becomes

(23)

Again, if the γ_k are independent, the expression in Eq. 21 in terms of the cumulative distribution
functions can be used.
It is seen that the structural model in general may have many more uncertain parameters to be
estimated. For this reason, whenever possible, it is desirable to use the reduced model in Eq. 7,
provided the independent variables can be measured accurately. If this is not possible, simplifying
assumptions may have to be made. These might include the assumption of independent and identical
distributions for the elements of e_k and γ_k over the different experiments. Furthermore, if the
measurement error for a subset of the variables is judged to be small, the corresponding measured
values can be considered to represent the true values, thus reducing the number of unknown parameters.
μ_β ≈ β(M_θ)      (25)

σ_β² ≈ ∇_θβ Σ_θθ ∇_θβᵀ      (26)

In these expressions, M_θ and Σ_θθ are the mean vector and covariance matrix of θ, respectively, and
∇_θβ denotes the gradient row-vector of β(θ) with respect to the elements of θ, evaluated at the mean
point. Clearly, the standard deviation σ_β is a measure of the influence of parameter uncertainties
on the reliability index. It is worth noting that the gradient vector ∇_θβ is easy to compute in the
first-order reliability method (FORM) as well as in certain simulation methods.
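The first-order approximation can be sketched with a hypothetical β(θ) and assumed second moments of θ; the gradient is computed by finite differences, as one would when β is available only algorithmically.

```python
import numpy as np

def beta(theta):
    # hypothetical reliability index as a function of two parameters
    return 3.0 - 0.5 * theta[0] + 0.2 * theta[1]

M_theta = np.array([1.0, 2.0])          # assumed mean vector of theta
Sigma_tt = np.array([[0.04, 0.01],      # assumed covariance matrix of theta
                     [0.01, 0.09]])

mu_beta = beta(M_theta)                 # first-order mean of beta (Eq. 25)

# gradient row-vector of beta at the mean point, by central differences
h = 1e-6
grad = np.array([
    (beta(M_theta + h * e) - beta(M_theta - h * e)) / (2.0 * h)
    for e in np.eye(2)
])

var_beta = grad @ Sigma_tt @ grad       # first-order variance (Eq. 26)
sigma_beta = float(np.sqrt(var_beta))
```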
In decision applications, one is usually interested in the expected value of the conditional probability
of failure,

p̄_f = ∫ p_f(θ) f_Θ(θ) dθ      (27)

and the corresponding reliability index

β̄ = −Φ⁻¹(p̄_f)      (28)

These have been defined as predictive measures of safety (Der Kiureghian 1989). The integral in
Eq. 27 can be computed by any reliability analysis technique, e.g., the first- or second-order reliability
methods (FORM and SORM) or simulation methods, by simply considering the uncertain parameters
θ as a subset of the basic random variables X.
Another approach for computing the above measures is to use a "nested" reliability analysis.
Consider a reliability problem defined by the limit-state function
g = u + β(θ)      (29)
in which u is a standard normal variate (zero mean and unit standard deviation) that is independent
of θ. As shown by Wen (1987) in a slightly different formulation, the solution of this problem
is identical to p̄_f, since one can write

P[g ≤ 0] = ∫∫_{u + β(θ) ≤ 0} f_u(u) f_Θ(θ) du dθ
         = ∫ [ ∫_{−∞}^{Φ⁻¹[p_f(θ)]} f_u(u) du ] f_Θ(θ) dθ
         = ∫ p_f(θ) f_Θ(θ) dθ      (30)
which is identical to the first expression on the right-hand side of Eq. 27. Following this approach,
for each value of θ one solves the "inside" conditional reliability problem defined by Eq. 24, and
then solves the "outside" reliability problem defined by the limit-state function in Eq. 29. Note that
the "inside" reliability problem has to be solved for each value of θ. Any of the conventional
reliability analysis methods, e.g., FORM, SORM, or simulation, may be used for each of these problems.
As one can see, the nested reliability approach solves two smaller reliability problems (one
involving the random variables X alone and another involving θ and u) in a nested manner, as
opposed to a direct approach involving all the random variables X and θ at the same time. The
nested approach is useful when a simplified method can be justified for either the "inside" or the
"outside" problem that could not be justified for the direct approach.
The nested reliability approach offers a simple approximation of the predictive reliability index.
Noting that the reliability index is approximately equal to the ratio of the mean to the standard
deviation of the limit-state function, using Eq. 29 one has

β̄ ≈ μ_β / (1 + σ_β²)^{1/2}

Together with the approximate relations in Eqs. 26, the preceding equation gives a simple formula
for determining the amount by which the "mean" reliability index (computed based on the mean
values of the parameters) decreases on account of the parameter uncertainties. In general, the
approximation will be good, provided the uncertainty in θ does not dominate the reliability problem.
Acknowledgment
A part of this work was carried out while the writer was a Visiting Professor of Systems
Analysis at the Mitsubishi Heavy Industry Chair at the Research Center for Advanced Science and
Technology of Tokyo University, Japan. The support provided during this visit is gratefully
acknowledged.
References
Bard, Y. (1974). Nonlinear parameter estimation. Academic Press, Inc., Orlando, Florida.
Beck, J.L. (1989). "Statistical system identification of structures." Proc. 5th Int. Conf. Struc. Safety
and Reliab., San Francisco, CA., 2, 1395-1402.
Box, G.E.P. and G.C. Tiao (1973). Bayesian inference in statistical analysis. Addison-Wesley
Pub. Co., Inc., Reading, Mass.
Der Kiureghian, A. (1989). "Measures of structural safety under imperfect states of knowledge." J.
Struct. Eng., ASCE (in press).
Ditlevsen, O. (1982). "Model uncertainty in structural reliability." Struc. Safety, 1(1), 73-86.
Ditlevsen, O. (1988). "Uncertainty and structural reliability. Hocus pocus or objective modeling."
Report No. 226, Department of Civil Engineering, Technical University of Denmark, Lyngby,
1988.
Jeffreys, H. (1961). Theory of probability, 3rd ed. Oxford University Press, London.
Liao, S.S.C., D. Veneziano and R.V. Whitman (1988). "Regression models for evaluating liquefaction
probability." J. Geotech. Eng., ASCE, 115(5), 1119-1140.
Lindley, D.V. (1985). Making decisions, 2nd ed. John Wiley & Sons, London, U.K.
Park, R. and T. Paulay (1975). Reinforced concrete structures. John Wiley & Sons, New York,
NY.
Seed, H.B., et al. (1985). "Influence of SPT procedures in soil liquefaction resistance evaluations."
J. Geotech. Eng., ASCE, 111(12), 1425-1445.
Wen, Y.K., and H.C. Chen (1987). "On fast integration for time variant structural reliability."
Probab. Eng. Mech., 2(3), 156-162.
SIZE EFFECT OF RANDOM FIELD ELEMENTS
ON FINITE-ELEMENT RELIABILITY METHODS
Pei-Ling Liu
Institute of Applied Mechanics
National Taiwan University
1 Introduction
Let the vector V denote the set of basic random variables pertaining to a structure,
and assume the joint probability density function (PDF) f_V(v) is known. The basic
random variables may include parameters defining loads, material properties, structural
geometry, etc.
Failure criteria of structures are usually defined in terms of the basic random variables,
V, and a load effect vector, S, such as stresses and deformations. Then, V and S are
related through the mechanical transformation
S = S(V) (1)
For all but trivial structures, this transformation is available only in an algorithmic sense.
Finite element reliability methods use finite element
analysis to compute the load effect vector. In accordance with the failure criteria, one
can formulate a limit-state function such that g(v, s) > 0 defines the safe state, g(v, s) ≤ 0
defines the failure state, and g(v, s) = 0 defines the limit-state surface. Then, the
probability of failure of the structure is

P_f = ∫_{g(v,s) ≤ 0} f_V(v) dv      (2)
Y = Y(V)      (3)

where the elements of Y are statistically independent and have the standard normal density.
Such a transformation is not unique. The selection of an appropriate transformation
is based on the distribution of V [4,8].
Der Kiureghian and Liu [4] suggested a probability transformation which is particularly
useful in the finite element reliability methods. In this method, a joint distribution
model, originally introduced by Nataf [12], with prescribed marginal distributions and
correlation matrix was proposed. The joint PDF of V is defined such that the variables
Z = (Z_1, ..., Z_n) obtained from the marginal transformations

Z_i = Φ⁻¹[F_{V_i}(V_i)],   i = 1, ..., n      (4)

are jointly normal, where F_{V_i}(·) denotes the marginal distribution of V_i, and Φ(·) denotes
the standard normal cumulative probability. Since the Z_i are jointly normal with zero means
and unit standard deviations, Z is completely defined by its correlation matrix. The
correlation coefficient ρ_{Z_i,Z_j} of Z_i and Z_j can be expressed in terms of the marginal
distributions and correlation coefficient of V_i and V_j through an integral relation [4]. The
transformation to the standard normal space for the above distribution model, then, is
given by

Y = L_Z⁻¹ Z      (5)

in which L_Z is the lower triangular matrix obtained from the Cholesky decomposition of
the correlation matrix of Z.
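The transformations of Eqs. 4 and 5 can be sketched for two variables with assumed lognormal marginals (for which Φ⁻¹[F(v)] = (ln v − λ)/ζ holds in closed form) and an assumed correlation matrix of Z; all parameter values are hypothetical.

```python
import numpy as np
from statistics import NormalDist

N01 = NormalDist()

# Assumed lognormal marginals with parameters lam, zeta, and an assumed
# correlation matrix C_Z of the transformed variables Z
lam, zeta = 0.0, 0.3
C_Z = np.array([[1.0, 0.6],
                [0.6, 1.0]])
L_Z = np.linalg.cholesky(C_Z)   # lower triangular, C_Z = L_Z L_Z^T

def to_standard_normal(v):
    # Eq. 4: z_i = Phi^{-1}[ F_{V_i}(v_i) ]
    z = np.array([N01.inv_cdf(N01.cdf((np.log(vi) - lam) / zeta)) for vi in v])
    # Eq. 5: y = L_Z^{-1} z
    return np.linalg.solve(L_Z, z)

y = to_standard_normal([np.exp(0.3), np.exp(0.15)])
```

For lognormal marginals the composition Φ⁻¹∘F is linear in ln v, so the round trip through the CDF shown here is redundant; it is written out to mirror Eq. 4 for arbitrary marginals.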
In the first- and second-order reliability methods (FORM and SORM), one searches
for the point on the limit-state surface nearest to the origin in the standard normal space
by solving the constrained optimization problem:

minimize   yᵀy
subject to G(y) = 0      (6)

where G(y) is the limit-state function in the y space. The optimal solution can be found
by the HL-RF method [6,13], which is based on the following recursive formula:

y_{k+1} = [ (∇G(y_k) y_kᵀ − G(y_k)) / |∇G(y_k)|² ] ∇G(y_k)ᵀ      (7)

where y_k and y_{k+1} are the values of y at the k-th and (k+1)-th iterations, respectively, and ∇G(y)
is the gradient of the limit-state function with respect to y. The optimal point, y*, is
called the design point, and the minimum distance, denoted β, is called the reliability
index:

β = −[ ∇G(y) yᵀ / |∇G(y)| ]_{y*}      (8)
In the first-order reliability method, the limit-state surface in the standard normal
space is replaced by the tangent hyperplane at the design point. The first-order estimate
of the probability of failure, then, is

P_f1 = Φ(−β)      (9)
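The search and the first-order estimate can be sketched for a hypothetical linear limit-state function in the standard normal space, for which the exact answer β = 3/√1.25 is known.

```python
import math

def Phi(t):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical linear limit-state function G(y) = 3 - y1 - 0.5*y2
def G(y):
    return 3.0 - y[0] - 0.5 * y[1]

def gradG(y):
    return [-1.0, -0.5]

# HL-RF recursion (Eq. 7)
y = [0.0, 0.0]
for _ in range(20):
    g, dg = G(y), gradG(y)
    norm2 = sum(d * d for d in dg)
    dot = sum(d * yi for d, yi in zip(dg, y))
    y = [(dot - g) / norm2 * d for d in dg]

beta = math.sqrt(sum(yi * yi for yi in y))
pf1 = Phi(-beta)   # first-order estimate of the failure probability (Eq. 9)
```

For a linear G the recursion converges in a single step; for nonlinear limit-state functions the same loop is iterated until y stabilizes.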
In the second-order reliability method, the limit-state surface in the standard normal
space is fitted with a second-order surface, usually a paraboloid [3]. The second-order
estimate of the probability of failure is computed in terms of β and the curvatures of the
fitted paraboloid.
In finite-element reliability analysis, random fields are often used to describe uncertainties
which possess spatial variability, for example, the Young's modulus of
a plate or the intensity of a distributed load. In most applications, the Gaussian random
field is assumed because of convenience and a lack of alternative models. However, the
Gaussian model is not applicable in many situations. For example, it cannot be used
to model the Young's modulus of a material, which is always positive.
Grigoriu [5] and Der Kiureghian [1] proposed that the Nataf distribution model be
used to model non-Gaussian fields with prescribed marginal distribution and mean and
autocorrelation functions. Let the random field W(x) have the marginal CDF F_W(w(x)),
where x is the position vector. The random field is completely defined by assuming that
the transformed process

Z(x) = Φ⁻¹[F_W(W(x))]      (10)

is Gaussian with zero mean, unit variance and autocorrelation coefficient function ρ_ZZ(x_i, x_j).
For any pair of points x_i and x_j, ρ_ZZ(x_i, x_j) can be calculated in terms of the marginal
distributions and correlation coefficient of W(x_i) and W(x_j) [1,5]. This model is very useful in
the finite element reliability methods.
midpoint method [2,7,16], the interpolation method [11], and series expansion methods
[9,14]. The midpoint method is adopted in this study, since the other methods are strictly
applicable only to Gaussian random fields [10]. In the first three methods the domain of
the field is discretized into a mesh of random field elements (not necessarily coinciding
with the finite element mesh), and the value for each element is described by a single
random variable.
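In the midpoint method, each random field element is represented by the value of the field at the element's midpoint, so the discretized variables inherit the field's correlation structure evaluated between midpoints. A sketch for a one-dimensional homogeneous field with an assumed exponential autocorrelation function (domain length, element count, and correlation length are hypothetical):

```python
import math

L = 32.0             # assumed domain length
n_elem = 8           # number of random field elements
corr_len = 0.25 * L  # assumed correlation length

# element midpoints
mids = [(i + 0.5) * L / n_elem for i in range(n_elem)]

def rho(dx):
    # assumed autocorrelation coefficient function
    return math.exp(-abs(dx) / corr_len)

# correlation matrix of the discretized element variables
C = [[rho(xi - xj) for xj in mids] for xi in mids]
```

The matrix C is what enters the Cholesky decomposition of the probability transformation discussed below.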
The selection of the random field mesh is an important task in finite element reliability
analysis. Three factors should be taken into account in the selection of the random field
mesh, namely, accuracy, stability, and efficiency. In view of accuracy, the element size is
controlled by the correlation length of the random field. The correlation length of a random
field is a measure of the distance over which the correlation coefficient approaches
a small value. Hence, smaller random field elements should be used if the correlation
length is small. The second factor is the numerical stability of the transformation to the
standard normal space. If the random field mesh is excessively fine, the discretized element
variables are highly correlated and their correlation matrix is nearly singular. The
probability transformation then may become numerically unstable. Hence, this factor
provides a lower bound on the element size. The last factor is the efficiency of the reliability
analysis. It is obvious that a smaller number of basic variables requires less computation
time. This is especially true when the gradient of the limit-state function is to be
computed by a finite difference scheme. Thus, the elements should be as large as possible
from this viewpoint. Note that the above controlling factors are different from those governing
the selection of the finite element mesh, which is basically controlled by the gradient of the
displacement field.
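The stability concern can be made concrete by tracking the condition number of the midpoint correlation matrix as the mesh is refined; the exponential autocorrelation, domain length, and correlation length below are assumed values.

```python
import numpy as np

def midpoint_corr(n_elem, L=32.0, corr_len=8.0):
    # correlation matrix of midpoint-discretized element variables,
    # assumed exponential autocorrelation
    mids = (np.arange(n_elem) + 0.5) * L / n_elem
    return np.exp(-np.abs(np.subtract.outer(mids, mids)) / corr_len)

cond_coarse = np.linalg.cond(midpoint_corr(4))
cond_fine = np.linalg.cond(midpoint_corr(64))
# Refining the mesh far below the correlation length drives the matrix
# toward singularity, and the Cholesky-based transformation toward instability.
```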
The selection of an appropriate random field mesh has been addressed by Der Kiureghian
and Ke [2]. They suggested that separate meshes be used for the finite elements
and for each of the random fields. In general, it is appropriate to use a finite element
mesh that satisfies the requirements based on the displacement gradient and the correlation
lengths of all random fields, and then for each random field to choose a mesh that
is coincident with or coarser than the finite element mesh, such that the corresponding
probability transformation remains stable. Their suggestions provide a useful guideline
for the selection of the random field mesh. However, how the size of the random field elements
influences the accuracy and stability of reliability analysis remains unexplored.
There are difficulties in investigating the effect of element size on the stability and
accuracy of reliability analysis. Suppose the random field is described by the Nataf model.
It is natural to apply the probability transformation in Eqs. 4 and 5 to transform the
discretized element variables into the standard normal space. This requires the Cholesky
decomposition of the correlation matrix. The decomposition may break down because of
numerical errors. Since numerical errors depend on the solution algorithm and the
precision of real numbers in the code, it is difficult to derive a general guideline for the
lower bound on the element size. Secondly, as mentioned before, the reliability analysis
cannot incorporate continuous random fields directly. Hence, there is no exact solution that can be used to
check the accuracy of the analysis for a given random field mesh. However, it is believed
that the finer the mesh is, the better the random field is represented. Therefore, error
analysis can proceed by comparing the results of a coarse mesh and a refined mesh.
Suppose the reliability of the continuum in Fig. 1 under certain loads is investigated. Let
the uncertainties in this problem be modeled by a set of random variables V and a random
field W, where V and W are statistically independent. Following Der Kiureghian and
Ke's suggestion [2], the domain of the continuum is first discretized into a set of finite
elements. Now consider two discretizations of the random field W (see Fig. 1); both
random field meshes are included in the finite element mesh. The fine mesh contains n
elements, and the coarse mesh contains k super elements, where a super element is
a collection of one or more elements of the fine mesh. In other words, the fine mesh
can be divided into k blocks, and each block coincides with a super element of the
coarse mesh. Note that the finite element mesh remains the same in both cases. That
is, the mechanical properties of the continuum remain the same despite the different
discretizations of the random field.

The discretized element variables associated with the coarse and the fine meshes are,
respectively,

W = (W_1, ..., W_k)      (11)

and

W̄ = (W̄_1, ..., W̄_k)      (12)

where W_i represents the value of W in the i-th super element of the coarse mesh, and W̄_i
represents the values of W in the i-th block of the fine mesh.
Let the random field be modeled by the Nataf model. Since V and W are statistically
independent, the probability transformation for the coarse mesh is

y = { y_v ; y_w },   y_w = L_Z⁻¹ z      (13)

where z is obtained from w using Eq. 10, and L_Z is the lower triangular matrix obtained
from the Cholesky decomposition of the correlation matrix of Z, C_Z. It is easily derived
from Eq. 8 that

(14)

where ∇_{y_v}G and ∇_z g are the gradients of the limit-state function with respect to y_v and
z, respectively, and y_v* and z* are the values of y_v and z at the design point, respectively.
[Fig. 1: finite element mesh and the coarse and fine random field meshes (dashed-grid drawings not reproduced)]
ȳ = { y_v ; ȳ_w },   ȳ_w = L_Z̄⁻¹ z̄      (15)

and, analogously to Eq. 14,

(16)

where the bar ¯ denotes the corresponding quantity for the refined mesh. Note that the
probability transformation for V remains the same in the latter case.
In general, the continuum and the loading are not the same for the two discretizations
at the design point. To compare β and β̄, one in fact has to solve the two optimization
problems separately. In order to avoid solving the optimization problem twice, one can
do the following: First find the design point, y*, for the coarse discretization. Let the
corresponding design point in the original space be [v*, w*]. Now assume that for the
refined mesh,

(17)

(18)

It follows that

(20)

For this special realization, the two continua and their loadings are identical. Observe
that the values of the limit-state function and its gradient in the original space depend only
on the mechanical properties and loading of the continuum, not on the probability
transformation. It follows that
∂g/∂z̄_i |_{y_v*, z̄*} = Σ_{element j ∈ block i} ∂g/∂z_j |_{y_v*, z*}      (22)

β̄ ≈ −[ ∇G(ȳ) ȳᵀ / |∇G(ȳ)| ]_{ȳ_1*}      (23)
It is seen that, except for C_Z̄ and ∇_z̄ g, all the quantities in the above expression are available
from the coarse mesh solution. In fact, if the gradient is computed analytically, ∇_z̄ g can
also be obtained using the coarse mesh solution with little extra computation. This is
because in that case ∇_z g is obtained by computing the gradient of the response with respect to
the loading, properties, or nodal coordinates of each finite element and then collecting the
contributions from all the finite elements in a random field element. Since the gradient
with respect to each finite element is readily available from the coarse mesh solution,
one can simply reassemble these quantities according to the refined mesh. On the other
hand, if the gradient is computed by a finite difference scheme, ∇_z̄ g can be obtained by
a deterministic solution for the gradient. No iteration is required.
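Both operations described above can be sketched: regrouping per-finite-element gradient contributions into random field elements, and the one-step projection that estimates the reliability index from a point on the limit-state surface. The block structure, gradient values, and the linear limit-state function are all hypothetical.

```python
import math
import numpy as np

# Regroup per-finite-element gradient contributions according to a
# (hypothetical) random field mesh: two super elements of two finite
# elements each.
per_fe_grad = np.array([0.1, 0.2, 0.3, 0.4])  # d(response)/d(element value)
blocks = [[0, 1], [2, 3]]                      # super element -> finite elements
coarse_grad = np.array([per_fe_grad[b].sum() for b in blocks])

# One-step estimate of beta: at a point y1 on the limit-state surface
# (G(y1) = 0), project onto the gradient direction.
def beta_estimate(y1, grad_G):
    y1, g = np.asarray(y1, float), np.asarray(grad_G, float)
    return -(g @ y1) / np.linalg.norm(g)

# Check against a linear G(y) = 3 - y1 - 0.5*y2, exact beta = 3/sqrt(1.25):
# the point (3, 0) lies on the surface but is not the design point.
b = beta_estimate([3.0, 0.0], [-1.0, -0.5])
```

For a linear limit-state surface the projection recovers β exactly from any point on the surface; for a mildly nonlinear surface it gives the one-iteration HL-RF estimate discussed next.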
Another way of viewing the estimation is through the optimization process. When
searching for the design point for the refined mesh, one can choose an initial point according
to Eqs. 17 and 18. This is the best choice possible with the information at
hand. Suppose the recursive formula in Eq. 7 is applied to search for the design point,
and only one iteration is performed. Since G(ȳ_1*) = 0, one obtains the reliability index
estimated by Eq. 23. It is known that the HL-RF method often converges fast, unless
the limit-state surface fluctuates rapidly near the design point. This explains why Eq. 23
usually provides a good estimate of β̄ and is better than (ȳ_1*ᵀ ȳ_1*)^{1/2}.
Recall the three requirements on the selection of the random field mesh, namely, accuracy,
stability, and efficiency. If a random field is discretized into a coarse mesh, the second and
third requirements will be satisfied, but not the first one. With the above approach,
the reliability index for the refined mesh can be estimated using the results of the coarse
mesh solution. Hence, the accuracy requirement can be satisfied as well.

[Figure: geometric interpretation of the estimate, showing the tangent at ȳ_1* (figure not reproduced)]
4 Examples
Two examples are examined in this section to check the feasibility of the above estimation.
First consider a fixed-ended beam with stochastic flexural rigidity EI(x). The beam
is subjected to a distributed load with stochastic intensity W(x), as shown in Fig. 3.
Both EI(x) and W(x) are modeled as homogeneous Gaussian processes. The mean and
coefficient of variation of EI(x) are 1,125,000 k-ft² and 0.20, respectively, and the mean
and coefficient of variation of W(x) are 8.0 k/ft and 0.30, respectively. The autocorrelation
coefficient functions are assumed to be of the form
ρ_{EI,EI}(Δx) = exp( −|Δx| / (0.5L) )      (24)

ρ_{W,W}(Δx) = exp( −|Δx| / (0.25L) )      (25)

where Δx is the distance between two points, and L = 32 ft is the beam span.
[Fig. 3: fixed-ended beam of span 32 ft with stochastic rigidity EI(x) under a distributed load of stochastic intensity W(x) (figure not reproduced)]
The beam is modeled with 32 uniform finite elements. It is considered failed if either
the midspan deflection exceeds 0.6 in, or the left-end moment exceeds 1,100 k-ft. This
example was adopted by Der Kiureghian and Ke [2] to illustrate the convergence of
β with the number of random field elements.

For the displacement limit state, the reliability indices for the cases with 1 and 32
random field elements are 2.643 and 4.162, and the corresponding failure probabilities
are 4.15 × 10⁻³ and 1.60 × 10⁻⁵, respectively. Obviously, the random field mesh with a
single element is not acceptable. However, using Eq. 23 to update its results yields a
reliability index of 4.188 and a failure probability of 1.40 × 10⁻⁵. These estimated values are
[Figure: computed and estimated reliability indices versus the number of random field elements: (a) displacement limit state; (b) moment limit state (plot not reproduced)]
very close to the values obtained by solving the refined mesh problem. The estimates
are also good for the other discretizations and for the moment limit state. The estimation
errors in all cases are less than 1.5%.

ρ_{E,E}(Δx) = exp( −ΔxᵀΔx / (0.25L)² )      (26)

where Δx is the position vector between two points, and L = 16 is the width of the plate.
The Poisson ratio is deterministic and equal to 0.2. No units are specified for the above
quantities; one may assign appropriate units to these quantities as needed.
[Fig. 5: square plate (16 × 16) discretized into 64 finite elements, showing the load F_y, node 81, and element 1 (figure not reproduced)]
The plate is discretized into 64 finite elements, as shown in Fig. 5. Two failure criteria
are considered: the horizontal displacement at node 81 exceeding a threshold of
0.03, and the tensile principal stress in element 1 exceeding a threshold of 400.

Six discretizations of the random field in the plate are considered, containing 1, 4,
16, 19, 28, and 64 random field elements, respectively (see Fig. 6). These random field
meshes were constructed in a way similar to the refining process of a finite element
mesh. That is, the one-element mesh was constructed first; then the mesh was refined
such that the previous mesh was contained in the refined mesh. This is required when we
examine the convergence of β with increasing number of random field elements.
The computed and estimated β's for these meshes are shown in Fig. 7 for both the
displacement and the stress limit states. It is seen that for the displacement limit state,
β converges as the number of random field elements increases. For the stress limit state,
β also shows signs of convergence, although the convergence is not as smooth as for the
displacement limit state. This is because the stress in an element depends more on the
local properties around the element and, thus, is more sensitive to changes in the random
field mesh.

Consider the case with only one random field element. The reliability index is 1.405
for the displacement limit state and 3.590 for the stress limit state. The respective β's for
the 64-element case are 1.789 and 2.878. If Eq. 23 is applied to the 1-element solution to
estimate β for the 64-element case, the reliability indices are 1.775 and 2.810, respectively,
for the displacement and stress limit states. These are, in fact, the worst cases among the
cases studied. If the coarse mesh contains more random field elements, the estimation
errors are even smaller, as shown in Fig. 7. Notice that the estimate is good even when
the failure probability is as high as 4.7% for the displacement limit state.
5 Conclusions

A general approach is introduced in this paper to investigate the size effect of the random
field elements on finite element reliability analysis. An expression is derived to estimate
the reliability index of a continuum with a refined random field mesh using the results of
the coarse mesh solution. The estimate is justified by a geometrical interpretation and
from an optimization point of view. The beam and plate examples illustrate the
practicability of this approach.
[Fig. 7: computed and estimated reliability indices versus the number of random field elements: (a) displacement limit state; (b) stress limit state (plot not reproduced)]
References
[2] A. Der Kiureghian and B.-J. Ke, "The Stochastic Finite Element Method in Struc-
tural Reliability," Probabilistic Engineering Mechanics, vol. 3, no. 2, pp. 83-91, 1988.
[3] A. Der Kiureghian, H-Z. Lin, and S-J. Hwang, "Second-Order Reliability Approxi-
mations," Journal of Engineering Mechanics, vol. 113 no. 8, pp. 1208-1225, ASCE,
Aug. 1987.
[4] A. Der Kiureghian and P-L. Liu, "Structural Reliability Under Incomplete Probability
Information," Journal of Engineering Mechanics, vol. 112, no. 1, pp. 721-740,
ASCE, Jan. 1986.
[6] A. M. Hasofer and N. C. Lind, "Exact and Invariant Second-Moment Code Format,"
Journal of Engineering Mechanics, vol. 100, no. 1, pp. 111-121, ASCE, Feb. 1974.
[7] T. Hisada and S. Nakagiri, "Role of the Stochastic Finite Element Method in Structural
Safety and Reliability," in Proceedings, 4th International Conference on Structural
Safety and Reliability, pp. 385-394, Kobe, Japan, May 1985.
[10] P-L. Liu and A. Der Kiureghian, "Finite-Element Reliability Methods for Geometrically
Nonlinear Stochastic Structures," Report No. UCB/SEMM-89/05, Department
of Civil Engineering, Division of Structural Engineering, Mechanics, and Materials,
University of California, Berkeley, CA, Jan. 1989.
[11] W. K. Liu, T. Belytschko, and A. Mani, "Random Field Finite Elements," Interna-
tional Journal for Numerical Methods in Engineering, vol. 23, pp. 1831-1845, 1986.
[12] A. Nataf, "Determination des Distribution dont les Marges Sont Donees," Comptes
Rendus de l'Academie des Sciences, vol. 225, pp. 42-43, Paris, 1962.
[13] R. Rackwitz and B. Fiessler, "Structural Reliability Under Combined Load Se-
quences," Computers and Structures, vol. 9, pp. 489-494, 1978.
239
[14] P. D. Spanos and R. Ghanem, "Stochastic Finite Element Expansion for Random
Media," Report NCEER-88-0005, March 1988.
Introduction
The problem of reliability-based optimum design of realistic structures requires the reso-
lution of several issues, some of which are as follows. (i) Most practical structures have compli-
cated configurations, and their analysis has to be done using computer-based numerical pro-
cedures such as finite element analysis; for such structures, the response is not available as a
closed-form expression in terms of the basic parameters. As a result, earlier methods of reliabil-
ity analysis and reliability-based design are not convenient for application to large structures.
(ii) Several stochastic parameters not only have random variation across samples but also
fluctuations in space; i.e., they may be regarded not simply as random variables, but as random
fields. This complicates the reliability analysis and subsequent design even further. (iii) The sto-
chastic design variables may have different types of distributions (several of them being Log-
normal, Type I Extreme Value etc.). Also, there may be statistical correlations among the design
variables. Such information has to be rationally incorporated in the optimization. (iv) Two types
of performance have to be addressed: one at the element level and the other at the system level.
Consideration of element level reliabilities in design optimization ensures a distribution of
weight such that there is uniform risk in the structure, whereas the consideration of system relia-
bility accounts for interacting failure modes and ensures overall safety. Therefore an optimiza-
tion algorithm that considers both types of reliability is desirable.
The Stochastic Finite Element Method (SFEM) appears capable of efficiently solving the
aforementioned problems. Given a probabilistic description of the basic parameters, SFEM is
able to compute the stochastic response of the structure in terms of either the response statistics
such as mean, variance, etc. or the probability of failure considering a particular limit state. This
is done by keeping account of the variation of the quantities computed at every step of the deter-
ministic analysis, in terms of the variation of the basic variables. This capability makes SFEM
attractive for application to reliability-based optimum design of large structures. Such an optim-
ization procedure is presented in this paper using SFEM-based reliability analysis, and illus-
trated with the help of a numerical example.
In the Advanced First Order Second Moment Method, a reliability index β is obtained as
β = (y*ᵀ y*)^(1/2), where y* is the point of minimum distance from the origin to the limit state
surface G(Y) = 0, and Y is the vector of random variables of the structure transformed to the
space of reduced variables. In this formulation, G(Y) > 0 denotes the safe state, and G(Y) < 0
denotes the failure state. The probability of failure is estimated as P_f = Φ(−β), where Φ is the
cumulative distribution function of a standard normal variable. Earlier algorithms for reliability
analysis in this method solved the limit state equation at each iteration point to find β, which
limited their application to simple problems where the limit state was available as a closed-form
expression in terms of the basic random variables. Alternatively, one could use the recursive for-
mula of Rackwitz and Fiessler [1] to evaluate y*:
    y_{i+1} = [y_i · α_i + G(y_i)/|∇G(y_i)|] α_i                  (1)

where ∇G(y_i) is the gradient vector of the performance function at y_i, the checking point in the
ith iteration, and α_i = −∇G(y_i)/|∇G(y_i)| is the unit vector normal to the limit state surface
pointing away from the origin. Since this method uses only the value and the gradient of the perfor-
mance function at any iteration point and does not require the explicit solution of the equation
G(y_i) = 0, it can be used for structures whose performance function is not available in closed
form. While G(y_i) is available from the usual structural analysis, ∇G(y_i) is computed using
SFEM.
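To make the recursion of Eq. (1) concrete, the following sketch implements the Rackwitz-Fiessler (HL-RF) iteration for an explicit performance function. The linear limit state used at the end is an invented stand-in for the SFEM-computed value and gradient of G; in the method described above, both would come from the finite element analysis.

```python
import numpy as np

def hlrf(G, grad_G, y0, tol=1e-6, max_iter=50):
    """Rackwitz-Fiessler (HL-RF) iteration in the reduced (standard normal) space."""
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        g = G(y)
        grad = grad_G(y)
        norm = np.linalg.norm(grad)
        alpha = -grad / norm                    # unit normal, pointing away from origin
        y_new = (y @ alpha + g / norm) * alpha  # Eq. (1)
        if np.linalg.norm(y_new - y) < tol:
            y = y_new
            break
        y = y_new
    return y, np.linalg.norm(y)                 # checking point y* and beta = |y*|

# Illustrative linear limit state G(y) = 3 - y1 - y2, for which beta = 3/sqrt(2)
G = lambda y: 3.0 - y[0] - y[1]
grad_G = lambda y: np.array([-1.0, -1.0])
y_star, beta = hlrf(G, grad_G, np.zeros(2))
```

For this linear case the iteration converges in two steps; for the nonlinear, implicitly defined performance functions of real structures, each iteration requires one structural analysis plus one SFEM gradient computation.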
In SFEM, the computation of VG(yj) is achieved by using the chain rule of differentiation
[2], through the computation of partial derivatives of quantities such as the stiffness matrix,
nodal load vector, displacement-to-generalized response transformation matrix etc. with respect
to the random variables. This finally leads to the computation of the partial derivatives of the
response as well as of the limit state with respect to the basic random variables X, resulting in
the estimation of the failure probability. The detailed implementation of this approach is
described in [2,3]. Thus the first problem identified above in reliability-based optimization of
realistic structures is solved. The second problem - non-normality of some of the random vari-
ables - is handled by transforming all the random variables to equivalent normal variables.
This can be achieved in a general way using the Rosenblatt transformation [4], or specifically by
matching the probability density function (pdf) and the cumulative distribution function (cdf) of
the non-normal variable at each iteration point y_i with those of an equivalent normal variable
[1].
Many structural parameters exhibit spatial fluctuation in addition to random variation
across samples. Examples of such parameters are distributed loads and material and sectional
properties that vary over the length of a beam, or over the surface of a plate etc. Such quantities
need to be expressed as random fields. In SFEM-based reliability analysis, these random fields
can be discretized into sets of correlated random variables [5]. However, this greatly increases
the size of the problem. To maintain computational efficiency, sensitivity analysis can be used to
measure the relative influence of the random variables on the reliability index; only those vari-
ables that have a high influence need to be considered as random fields [3]. In fact, the random-
ness in variables with very little influence may altogether be ignored in subsequent iterations of
the reliability analysis. Further, mesh refinement studies have been carried out to minimize the
number of discretized random variables to effectively represent the random fields [3]; this
further improves the computational efficiency.
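A minimal sketch of the midpoint-type discretization described above: each random field element is represented by the field value at its midpoint, yielding a vector of correlated random variables. The exponential correlation model and all numerical parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def midpoint_discretize(length, n_elem, mean, std, corr_length):
    """Discretize a 1-D homogeneous random field into correlated midpoint variables."""
    x_mid = (np.arange(n_elem) + 0.5) * (length / n_elem)
    dist = np.abs(x_mid[:, None] - x_mid[None, :])
    corr = np.exp(-dist / corr_length)   # assumed exponential autocorrelation
    cov = (std ** 2) * corr
    return np.full(n_elem, float(mean)), cov

# Illustrative field: e.g. a sectional property along a 10-unit beam, 8 elements
mu, cov = midpoint_discretize(length=10.0, n_elem=8, mean=200.0, std=20.0, corr_length=5.0)

# One realization of the discretized field (for simulation-based checks)
rng = np.random.default_rng(0)
sample = rng.multivariate_normal(mu, cov)
```

Refining the mesh increases n_elem and hence the number of correlated random variables, which is exactly the growth in problem size that the sensitivity-based reduction above is meant to control.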
Optimization Algorithm
Any optimization procedure has three aspects: objective function, constraints, and the
algorithm to search for the optimum solution. Different objective functions have been used in
reliability-based optimization studies in the past, such as minimization of weight (e.g. Moses
and Stevenson [6]), minimization of cost (e.g. Mau and Sexsmith [7] etc.), and minimization of
the probability of failure (e.g. Nakib and Frangopol [8]). Multi-objective, multi-constraint
optimization techniques have also been used (e.g. Frangopol [9]). In this paper, a simple and
convenient objective function, the minimization of the total weight of the structure, is chosen. It
will be apparent later that the choice of other objective functions mentioned above will not
affect the general applicability of the proposed method.
All the constraints to be considered here are related to the reliability of the structure. Two
types of reliability constraints can be used: component reliability and system reliability. The
former measures the reliability of the individual components corresponding to various limit
states, while the latter accounts for simultaneously active individual failure modes and measures
an overall failure probability of the system. The use of component reliability closely resembles
the approach used in design offices, namely the proportioning of individual members based on
the forces acting on them. It also facilitates control at the element level, and helps to ensure uni-
form risk in the structure. The use of system reliability as the only constraint ensures overall
safety of the structure, but it is difficult to estimate for large, realistic structures with the present
state of the art, and may result in nonuniform risk for different members. In this paper, the
optimum design is defined as that in which the reliability indices corresponding to all the
element-level limit states are within a desired narrow range. At each step, the system reliability
constraints are checked to make sure that the overall failure probability is less than the desired
value.
The element reliability constraints are written as

    β_i^l ≤ β_i ≤ β_i^l + Δβ,    i = 1, ..., m                  (2)

where the lower bound β_i^l specifies the minimum required safety level for the ith limit state,
while the upper bound Δβ indicates the desired range of β_i, and m is the number of limit states.
The optimum design is said to be reached if all the β_i values fall within the desired range.
Some element-level reliability constraints may simply require the satisfaction of the limit
state equation at the nominal values, as in the case of code-specified serviceability criteria. Such
constraints may be written as

    g_j ≥ 0,    j = 1, ..., l                  (3)

where g_j is the performance function for the jth such limit state and l is the number of such limit
states. The system reliability constraint is written as

    P_f ≤ p_f^a                  (4)

where P_f is the overall failure probability of the structural system, which is required to be less
than an acceptable value p_f^a.
The reliability indices corresponding to all the element-level limit states are obtained
using the SFEM-based reliability analysis described earlier. The system reliability constraints
may be evaluated using any of the well-known methods [10]. Since system reliability is used
only to check the feasibility of a design, an approximate but fast method such as the use of
Cornell's upper bound may also be considered adequate.
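The fast feasibility check mentioned above can be sketched as follows: for a series system, the first-order bounds give max(p_i) ≤ P_f ≤ Σ p_i, and the sum (Cornell's upper bound) is compared against the acceptable system risk. The mode probabilities below are invented for illustration.

```python
def system_bounds(mode_probs):
    """First-order series-system bounds on the system failure probability."""
    return max(mode_probs), min(1.0, sum(mode_probs))

# Illustrative individual failure-mode probabilities
p_modes = [2.0e-6, 5.0e-7, 1.0e-7]
lower, upper = system_bounds(p_modes)

# Feasibility check against an acceptable system-level risk (illustrative value)
feasible = upper <= 1.0e-5
```

Because only feasibility is being checked, the conservatism of the upper bound is acceptable here: a design passing with the upper bound certainly satisfies the true system constraint.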
The feasible region for the design is defined by Eq. (3), by the lower bounds of Eq. (2),
indicating the acceptable level of risk for each element-level limit state, and by Eq. (4), indi-
cating the acceptable risk at the system level. Reliability-based design formats such as LRFD are
derived based upon this idea of acceptable risk; the load and resistance factors correspond to
prespecified target values of β. Thus one may select the lower bounds of Eq. (2) to be the same
as the target β values used in reliability-based design codes. The bound in Eq. (4) needs to be
based on experience regarding the acceptable level of system reliability. The upper bounds of β in
Eq. (2) are established such that the β values of different elements fall within a narrow range to
assure uniformity in the risk levels. Referring to Eq. (2), it can be seen that it is also possible to
specify different desired risk levels for different limit states, thus accounting for the fact that all
the limit states may not have equal importance.
Starting with a feasible trial structure, the algorithm achieves uniform risk within the
feasible region by searching only in the direction of reducing β values. This means that the
algorithm needs to examine only those configurations whose member sizes are less than those of
the trial structure. Any movement produces a reduction in weight, resulting in minimum weight
for the optimum solution. If the new design still satisfies the lower bounds of Eqs. (2), (3) and
(4), it is accepted as a success; otherwise it is rejected as a failure and the step size is halved in
that direction until no significant improvement is possible.

The convergence of the algorithm is accelerated by using discrete step sizes which are
determined by different ranges in the values of (β_i − β_i^l) at any iteration. For example, one may
choose step sizes as

Such a method is easy and fast to implement; even though it is an approximate rule, it is
sufficient since the purpose of the algorithm is not to find an absolute optimum but only to
ensure that all the β_i values are within a desired range. Furthermore, it also allows the use of
different step sizes in different directions. The search is stopped either when all the β_i's are
within the desired range or when the smallest step size in every coordinate direction is smaller
than a prescribed tolerance level. Before beginning the optimum design algorithm, a feasibility
check may be made; if the trial structure is infeasible, then a feasible starting point may be
achieved by simply reversing the search directions and using only the lower bounds of the con-
straints.
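The accept/reject search with step halving can be sketched as below. The reliability analysis is abstracted into a placeholder predicate `constraints_ok` (in the method above it would invoke the SFEM-based β computations and the system check); the toy constraint and sizes in the usage are invented.

```python
def optimize(z0, constraints_ok, steps, tol=1e-3):
    """Reduce member sizes while all reliability constraints remain satisfied;
    halve the step in a direction whenever a trial move becomes infeasible."""
    z = list(z0)
    steps = list(steps)
    improved = True
    while improved and max(steps) > tol:
        improved = False
        for i in range(len(z)):
            while steps[i] > tol:
                trial = list(z)
                trial[i] -= steps[i]              # search only toward smaller members
                if trial[i] > 0 and constraints_ok(trial):
                    z = trial                     # success: lighter feasible design
                    improved = True
                    break
                steps[i] *= 0.5                   # failure: halve the step size
    return z

# Toy usage: each "member size" must stay at or above 4.0 to remain feasible
z_opt = optimize([10.0, 10.0], lambda z: all(v >= 4.0 for v in z), steps=[4.0, 4.0])
```

Since every accepted move shrinks a member, the final design is the lightest one the discrete steps can reach inside the feasible region, mirroring the minimum-weight objective above.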
Numerical Example
A steel portal frame, shown in Fig. 1, is subjected to a lateral load H and a uniformly dis-
tributed vertical load W. There are five basic random variables in this structure, whose statistical
description is given in Table 1. The design variables are the plastic section modulus (Z) of the
various members. The area (A) and moment of inertia (I) are related to Z through the following
expressions derived using regression analysis (refer to [11] for details):
Element-level Strength Limit States. The following performance criterion (combined axial
compression and bending) is observed to be critical for all three members:

    P/P_u + C_m M / [M_p (1 − P/P_E)] ≤ 1.0                  (6)

where P is the applied axial load on the member, M is the applied bending moment, P_u is the
ultimate axial load that can be supported by the member when no moment is applied, P_E is the
Euler buckling load in the plane of M, P_y = A F_y, where F_y is the yield strength, M_p = Z F_y is
the plastic moment capacity, and C_m is as defined in the AISC LRFD Specifications [12]. For all
three members in the frame, C_m = 0.85 is used. Of the two columns, the one on the right is
found to be critical.
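The criterion of Eq. (6) can be evaluated directly once the member forces and capacities are known; the numerical values below are invented for illustration and are not the frame's actual member forces or section properties.

```python
def interaction_ratio(P, M, Pu, Mp, PE, Cm=0.85):
    """Left-hand side of the beam-column interaction criterion of Eq. (6)."""
    return P / Pu + Cm * M / (Mp * (1.0 - P / PE))

# Illustrative member forces and capacities (consistent units assumed)
ratio = interaction_ratio(P=120.0, M=900.0, Pu=500.0, Mp=1500.0, PE=2000.0)
ok = ratio <= 1.0   # performance criterion satisfied when the ratio is at most 1.0
```

In the reliability analysis this expression plays the role of the performance function: G = 1.0 − ratio, with failure corresponding to G < 0.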
[Fig. 1. Steel portal frame: lateral load H at beam level and uniformly distributed vertical load W on the beam; column height 15 ft, beam span 30 ft; member numbers indicated.]
The reliability constraint corresponding to this performance criterion for the critical column and
the beam is given by
(7)
Element-level Serviceability Limit States. The design is also required to satisfy two servicea-
bility constraints. The limiting vertical deflection at the midspan of the beam is span/240, while
the limiting side-sway at the top of the frame is height/400. In the present example, it is required
for the sake of illustration that the serviceability limits be satisfied at the mean values of the ran-
dom variables; thus no reliability ranges are defined. Therefore these two constraints are written
as
System Reliability. The overall probability of plastic collapse of the frame is considered for
system reliability. This is computed as described in [13], considering ten possible collapse
modes of the frame and finding the probability that at least one of them will occur.
Cornell's upper bound is used as an approximation for the system failure probability, for the
sake of illustration. That is,

    P_f ≤ Σ_{i=1}^{m} P_{fi}                  (10)

where P_f is the overall failure probability, P_{fi} is the failure probability of the individual
mode i, and m is the number of failure modes. The corresponding system reliability constraint is
written as

    P_f ≤ 10⁻⁵                  (11)
The step sizes for the optimization algorithm are as shown in Eq.(5). Mahadevan and Haldar
[11] discussed elsewhere in detail the practical implementation of the proposed optimization
procedure to satisfy the aforementioned constraints, and presented several strategies to improve
computational efficiency.
Conclusions
Acknowledgements
This paper is based upon work partly supported by the National Science Foundation
under Grants No. MSM-8352396, MSM-8544166, MSM-8746111, MSM-8842373, and MSM-
8896267. Financial support received from the American Institute of Steel Construction, Chicago
is appreciated. Any opinions, findings, and conclusions or recommendations expressed in this
publication are those of the authors and do not necessarily reflect the views of the sponsors.
Iteration   Plastic Section           Element-level            System Plastic        Feasibility
No.         Modulus (in³)             β's and g's              Collapse Probability
1           Z1 = 132.0, Z2 = 132.0    β1 = 5.25, β2 = 5.70,    2.61 × 10⁻⁸           Yes
                                      g3 = 0.78, g4 = 0.51
2           Z1 = 92.4,  Z2 = 132.0    β1 = 3.91, β2 = 6.11,    5.26 × 10⁻⁸           Yes
                                      g3 = 0.71, g4 = 0.43
3           Z1 = 92.4,  Z2 = 92.4     β1 = 3.55, β2 = 4.10,    1.72 × 10⁻⁴           No
                                      g3 = 0.67, g4 = 0.28
4           Z1 = 92.4,  Z2 = 112.2    β1 = 3.72, β2 = 5.17,    2.54 × 10⁻⁶           No
                                      g3 = 0.69, g4 = 0.37
5           Z1 = 87.8,  Z2 = 112.2    β1 = 3.53, β2 = 5.23,    3.37 × 10⁻⁶           Yes
                                      g3 = 0.69, g4 = 0.35
6           Z1 = 87.8,  Z2 = 95.4     β1 = 3.38, β2 = 4.32,    1.09 × 10⁻⁴           No
                                      g3 = 0.67, g4 = 0.28
7           Z1 = 87.8,  Z2 = 103.8    β1 = 3.45, β2 = 4.79,    1.86 × 10⁻⁵           No
                                      g3 = 0.67, g4 = 0.32
References
1. Rackwitz, R. and Fiessler, B. Structural reliability under combined random load sequences,
   Computers and Structures, Vol. 9, pp. 489-494, 1978.
2. Der Kiureghian, A. and Ke, J.B. Finite element-based reliability analysis of framed struc-
   tures, in Structural Safety and Reliability (Eds. Moan, T., Shinozuka, M. and Ang, A.H-S.),
   pp. I-395 to I-404, Proceedings of the 4th Int. Conf. on Structural Safety and Reliability,
   ICOSSAR'85, Kobe, Japan, Elsevier, Amsterdam, 1985.
3. Mahadevan, S. Stochastic Finite Element-Based Structural Reliability Analysis and Optimi-
   zation. Ph.D. Thesis, Georgia Institute of Technology, Atlanta, 1988.
4. Rosenblatt, M. Remarks on a Multivariate Transformation. Annals of Mathematical Statis-
   tics, Vol. 23, No. 3, pp. 470-472, 1952.
5. Vanmarcke, E.H., Shinozuka, M., Nakagiri, S., Schueller, G.I. and Grigoriu, M. Random
   Fields and Stochastic Finite Elements, Structural Safety, Vol. 3, pp. 143-166, 1986.
6. Moses, F. and Stevenson, J.D. Reliability-Based Structural Design, Journal of the Struc-
   tural Division, ASCE, Vol. 96, No. ST1, pp. 221-244, 1970.
7. Mau, S-T. and Sexsmith, R.G. Minimum Expected Cost Optimization, Journal of the Struc-
   tural Division, ASCE, Vol. 98, No. ST9, pp. 2043-2058, 1972.
8. Nakib, R. and Frangopol, D.M. Reliability-Based Analysis and Optimization of Ductile
   Structural Systems, Structural Research Series 8501, Department of Civil, Environmental
   and Architectural Engineering, University of Colorado, Boulder, 1985.
9. Frangopol, D.M. Multicriteria Reliability-Based Structural Optimization, Structural Safety,
   Vol. 3, pp. 154-159, 1985.
10. Thoft-Christensen, P. and Murotsu, Y. Application of Structural Systems Reliability
    Theory. Springer-Verlag, Berlin, 1986.
11. Mahadevan, S. and Haldar, A. Efficient Algorithm for Stochastic Structural Optimization,
    Journal of Structural Engineering, ASCE, Vol. 115, No. 7, pp. 1579-1598, July 1989.
12. American Institute of Steel Construction. Manual of steel construction: load and resistance
    factor design. Chicago, 1986.
13. Frangopol, D.M. Computer-Automated Sensitivity Analysis in Reliability-Based Plastic
    Design, Computers and Structures, Vol. 22, No. 1, pp. 63-75, 1986.
14. Mahadevan, S. and Haldar, A. Stochastic Finite Element-Based Optimum Design of Large
    Structures, in Computer-Aided Optimum Design of Structures: Applications (Eds. Brebbia,
    C.A. and Hernandez, S.), Computational Mechanics Publications, Southampton, pp. 265-
    274, 1989.
CLASSIFICATION AND ANALYSIS OF UNCERTAINTY
IN STRUCTURAL SYSTEMS
William Manners
Department of Engineering, University of Leicester
Leicester, LE1 7RH, U.K.
INTRODUCTION
Decisions about engineering design, construction, maintenance and
repair have to be taken in spite of a lack of complete knowledge about
all the factors that ought to influence such decisions. Often the only
information available which sheds any light on the magnitude of the un-
certainties present was created or collected for different purposes.
For instance, data on the strength of structures, or elements of
structures, is usually only available from tests made to determine or
validate design rules. In such experiments the aim is to cover as wide
a range of different examples as possible, with the result that there
is not the repetition of notionally identical tests that is the basis
of classical statistics. Nonetheless such data do contain information
relevant to the assessment of uncertainty.
provided that the function g(.) is such that increasing Z always en-
closes an increased volume of the x-space. If the dependence involves
a variety of possible sequences of events, as in the case of partial
failures of a structural system, then, of course, the problem cannot
be expressed as simply as equation [1], but the same concepts are
still involved.
SOURCES OF UNCERTAINTY
MANUFACTURING PROCESSES
or [6]
Taking equation [6], and assuming that the differences between the true
and nominal values of the variables are small, gives

    z = g_T(x^T)/g(x^N) ≈ g_T(x^N)/g(x^N) + (1/g(x^N)) Σ_i (∂g_T(x)/∂x_i)(x_i^T − x_i^N)    [7]

where the summation is over all the variables in x₁, x₂, x₃ and x₄,
and ∂g_T(x)/∂x_i is evaluated at the nominal values of all the variables.
which is zero if

    (1/g_T(x^N)) ∂g_T(x)/∂x_i = (1/g(x^N)) ∂g(x)/∂x_i    [12]
Hence if the available model g(.) represents the unknown, true behaviour
gT(.) so that this equation is satisfied, then the effect of the
variation between true and nominal values of ~l variables will dis-
appear from the model uncertainty.
    z^T = Δ^T + g(x₁^T) ≈ g_T(x^T)    [13]

and hence

    Δ^T ≡ g_T(x^T) − g(x₁^T)    [14]

which, with the linearised model, becomes

    Δ^T ≈ g_T(m) − g(x₁^T) + Σ_i a_i(x_i^T − m_i)
        = g_T(m) − g(x₁^T) + Σ_i a_i(x_i^T − x_i^N) + Σ_i a_i(x_i^N − m_i)    [15]
showing that, as long as the linear relationships are valid, the vari-
ability in Δ can be attributed separately to the variability between
true and nominal values and the variability of nominal values within
the given population. In general, the greatest difficulty of such an
analysis is knowing whether there are any mixed populations in the
contractor-controlled x₃ variables. Normally it has to be assumed
that there are not; any that do exist may then show themselves as anomalies
in the data and results. For the "ideal" case where the model is good
(i.e. a_i = 0 for the x₁ variables), where there are no mixed populations in x₃,
and remembering that x_i^N = m_i for the x₄ variables, the only items con-
tributing to the second summation in equation [15] (i.e. the variations
between nominally identical cases) are the x₂ variables.
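A small numerical check of the decomposition in equation [15]: with a linear model, the deviation of z from the population mean splits exactly into a true-versus-nominal part and a nominal-versus-population part. All coefficients and values below are invented for illustration.

```python
# Invented sensitivity coefficients a_i, population means m_i, and one case's
# nominal (x_i^N) and true (x_i^T) values
a = [0.7, -0.3, 1.2]
m = [10.0, 5.0, 2.0]
x_nom = [10.4, 4.8, 2.1]
x_true = [10.5, 4.9, 2.05]

# Total linearised deviation and its two components, as in equation [15]
total = sum(ai * (xt - mi) for ai, xt, mi in zip(a, x_true, m))
true_vs_nominal = sum(ai * (xt - xn) for ai, xt, xn in zip(a, x_true, x_nom))
nominal_vs_population = sum(ai * (xn - mi) for ai, xn, mi in zip(a, x_nom, m))
```

The identity holds exactly here because the split is purely algebraic; in practice it is only as good as the linearisation underlying [15].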
CONCLUSIONS
The uncertainties present in structural systems can be classified
and described either by their location in the formulation of a problem,
or by the source of the uncertainty.
Robert E. Melchers
Department of Civil Engineering and Surveying
The University of Newcastle, N.S.W., 2308, Australia
ABSTRACT
Conventional structural analysis deals with structures for which all loading is
assumed to be a function of one independent parameter, that is, the loading is
applied proportionally. In the context of structural reliability calculations, this
means that the analysis is "load path independent". Thus the limit states for the
structural system are assumed not to change as the loading is applied. This will be
assumed to be the case also in this paper.
First-Order Second Moment framework exist (e.g., Madsen and Zadeh, 1987; Wen and
Chen, 1987, 1989). The present paper, based on a research report (Melchers, 1989),
describes a means of solving the problem using the recent generalisation of
directional simulation (Ditlevsen et al., 1988; Melchers, 1990). As in these
works, the problem is formulated in the hyper-polar space.
In the present paper the space of the load processes is used; the load variable
space has been previously used (e.g., Schwarz, 1980; Lin and Corotis, 1985).
Consider a structural system acted upon by n load processes Q(t) and having random
properties described by the vector X. The probability of structural failure ("failure" in
the sequel) in the period (0, t_L), where t_L is some design life, is given by

    P_f = ∫ P_f(x) f_X(x) dx                  (1)

where f_X( ) is the known joint probability density function of X and P_f(x) is the
conditional failure probability for given X = x. In all but elementary problems P_f(x) is
difficult to obtain; for such problems also the integration in (1) is of high order and
approximate techniques must be used (see, e.g., Melchers, 1987).
Let the components of an n-vector R, termed the load capacity of the structure,
describe, for fixed X = x, the critical limit state(s) for the structure, such that
when Q > R the structure fails (see Figure 1). When Q = R, Q is on the limit state
surface, and therefore the ith limit state equation G_i = 0 yields immediately a functional
relationship

    G_i(R, x) = 0    ∀ i                  (2)
[Figure 1. Load space (q₁, q₂): failure domain Q > R bounded by realizations of the limit state surfaces G_i(R, x) = 0, with the hyper-polar origin at C.]
The failure probability may then be written as

    P_f = ∫ P_f(r) f_R(r) dr                  (3)

where P_f(r) is the probability that Q > r and f_R( ) is the joint p.d.f. of the
load capacity R. The order of integration in (3) is n, less than that of (1), for
which it is m, and in most realistic structural reliability problems it is significantly less.
Consider now a hyper-polar co-ordinate system centred at some point C in the safe
domain. Apart from suggesting that the point be chosen to lie in the safe domain and
to expose as much as possible of the (generally non-convex and piecewise) inner envelope of limit
state functions (for any given X = x), there are no obvious criteria for its selection.
not necessarily independent of A. Hence for given A = a the vector R has a conditional
distribution along the ray, and the failure probability may be written in hyper-polar form as

    P_f = ∫_{unit sphere} f_A(a) ∫_s P_f(s|a) f_{S|A}(s|a) ds da                  (4)
For a given structural life (0, t_L), the conditional failure probability P_f(s|a) can
be obtained from the well-known bound involving the outcrossing rate ν_D⁺ of the vector
process Q(t) out of the safe domain D: {G_i(q, x) ≥ 0, ∀ i}:

    P_f(t_L) ≤ P_f(0) + ∫₀^{t_L} ν_D⁺(t) dt                  (5)

in which P_f(0) denotes the failure probability at time t = 0. This quantity can be
readily obtained using well-known methods for time-invariant reliability and will not
be further considered in detail.

Expression (5) may be simplified in various ways (e.g., Melchers, 1987). The
simplest, for problems with rare outcrossings, is

    P_f(t_L) ≈ P_f(0) + t_L ν_D⁺                  (6)
To evaluate (6), the mean outcrossing rate ν_D⁺ for the domain D may be expressed as

    ν_D⁺ = ∫_S ν⁺(s, t_L) d(ΔS)                  (7)

where ν⁺(s, t_L) is the local outcrossing rate through the elementary domain boundary
ΔS at the point S = s, obtained from the well-known generalised Rice formula, or

    ν⁺(s) = E[(n·Q̇)⁺ | Q = s] f_Q(s)                  (8)
where f_Q( ) is the p.d.f. of the vector load process, and n( ) is the outward unit normal vector at S = s, giving
(9)
In general, the unit normal vector n = {n_i} will vary with S = s; it can be obtained
directly as

    n_i = (∂G/∂q_i) / |∇G|                  (10)
To obtain the mean outcrossing rate for use in (6), expression (9) must be modified
to allow for the orientation of the limit state surface at S = s. Substituting into (4)
and (6) then gives

    P_f(t_L) ≈ ∫_{unit sphere} f_A(a) ∫_s { P_f(s|a, 0) + t_L E[n(s|a)·Q̇]⁺ f_Q(s) s^{n−1} } ds da    (12)

which may be evaluated by directional importance sampling through the expectation forms

    P_f(0) = E[ P_f(s|a, 0) f_A(a) / h_A(a) ]                  (13)

and

    ν_D⁺ = E[ E[n(s|a)·Q̇]⁺ f_Q(s) s^{n−1} f_A(a) / ( h_A(a) h_{S|A}(s|a) ) ]                  (14)

where in (13) the samples are taken from h_A(a), an appropriately chosen importance
sampling p.d.f., and in (14) correspondingly from h_{S|A}(s|a) for the radial
direction s.
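The directional sampling idea can be illustrated for the time-invariant term P_f(0) alone, in a standard normal load space of dimension 3: directions a are sampled uniformly on the unit sphere, the distance s(a) to the limit state is found along each ray, and the exact conditional probability P(χ²₃ > s²) is averaged. The half-space limit state below is an invented stand-in for a structural model, and plain (non-importance) directional sampling is used for simplicity.

```python
import math
import random

def chi2_sf_3(t):
    """Survival function P(chi-square with 3 d.o.f. > t), in closed form."""
    return math.erfc(math.sqrt(t / 2.0)) + math.sqrt(2.0 * t / math.pi) * math.exp(-t / 2.0)

def directional_pf(limit_distance, n_samples=20000, seed=1):
    """Directional simulation of Pf(0) in 3-D standard normal load space."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        d = [rng.gauss(0.0, 1.0) for _ in range(3)]
        norm = math.sqrt(sum(x * x for x in d))
        d = [x / norm for x in d]          # uniform direction on the unit sphere
        s = limit_distance(d)              # radial distance to the limit state
        if math.isfinite(s):
            total += chi2_sf_3(s * s)      # exact conditional probability along the ray
    return total / n_samples

# Illustrative half-space failure domain q1 > beta, for which Pf = Phi(-beta)
beta = 2.0
pf = directional_pf(lambda d: beta / d[0] if d[0] > 1e-12 else math.inf)
```

Because each direction contributes an exact conditional probability rather than a 0/1 indicator, the estimator's variance is far lower than crude Monte Carlo for the same sample size, which is the main attraction of the directional formulation.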
The above formulation requires the structural strength variation to be known along
any radial direction S = s for given A = a. For any individual limit state function,
the number of contributing random variables X_i may be large. Accordingly, the central
limit theorem might be invoked to argue that S is approximately Gaussian, that is, that
it can be represented by its first two moments. This approximation would be expected to
improve with increased dimension m, provided a corresponding increase occurs in the
number of X_i contributing to any one limit state function. From (2), for any given
resistance vector R|a, the random distance to the limit state surface is

    S = S(X) | a                  (15)
It then follows that, as a first approximation, the first two moments of S are given by
standard formulae for Gaussian variables (e.g., Melchers, 1987). These require that
the limit state functions be explicit and differentiable. Once the moments are
known, the p.d.f. f_{S|a}( ) can be immediately constructed. Some other approaches
are also possible.

In general, there will be more than one limit state equation G_i( ) = 0, and their
distributions may overlap. This more general situation can be readily incorporated
in the above procedure; the principles are illustrated in Figure 3.
[Figure 3. Effective distribution f_{S|a} along a radial direction when two limit states, (1) and (2), overlap.]
The frame structure shown in Figure 4 is subject to three stationary Gaussian load
processes Q(t) with given mean vector and covariance function matrix, the latter
involving the autocorrelation functions ρ(τ) and ρ(2τ),
so that, for example, for the first limit state equation, n = (1, 0, 0), and the only
non-zero term corresponds to i = j = 1, involving ρ̈(0), the second derivative of
ρ(s − t) evaluated at s − t = 0. It may be shown that the [ ]⁺ terms for the other
limit state expressions are

    {−ρ̈(0)}^{1/2}  and  {−(5/22) ρ̈(0)}^{1/2}
In the above, and for n·Q̇, analytical expressions were used for the components n_i
of n. In the present example, with linear limit state functions, these expressions
are independent of X, the strength random variable. For more general situations,
some approximation may be required to obtain n.
Some typical results extracted from Melchers (1989) are presented in Figure 5. The
results for P_f(0) and ν⁺ were obtained from the expectation forms (13) and (14)
respectively. In the simulation, h_A was taken as uniform over the unit sphere
centred at C, and h_{S|A} was taken as a normal distribution with mean at the mean of
the critical limit state along s, given A = a. The integration over s was done
numerically.
Rather than centre the hyper-polar co-ordinate system on the mean point, other possibilities have
been described for the time-invariant case by Ditlevsen et al. (1988). In certain
situations such possibilities can significantly improve sampling efficiency.
In the present formulation it is advantageous to scale the Q space such that the
standard deviations σ_{Qi} are unity.
Finally it will be observed that (i) simulation is herein in the load process
space, and this is generally of much lower dimension than the combined space of Q and X;
[Figure 5. Simulation results for P_f(0) and ν⁺ plotted against −ρ̈(0), for mean values λ_X = 0.4, 0.6, 0.8 and 1.0.]

(ii) the transformation to the standard
Gaussian space is not required to be performed; (iii) simulation for P_f(0) and for
ν⁺ can be carried out concurrently, offering further numerical savings; and (iv)
the use of the (hyper-)polar co-ordinate system obviates many of the difficulties
occasioned by sampling in Cartesian co-ordinates: in particular, points of maximum
likelihood, or so-called "design" or "checking" points, need not be identified.
An outline has been given of the formulation in the load process space of the time-
dependent structural reliability problem. Its main advantage is the
considerably reduced directional Monte Carlo sampling required. An example problem
was given.
DITLEVSEN, O., BJERAGER, P., OLSEN, R. and HASOFER, A.M. (1988). Directional
Simulation in Gaussian Processes, Prob. Engrg. Mech., Vol. 3, No. 4, pp. 207-217.
LIN, T.S. and COROTIS, R.B. (1985). Reliability of Ductile Systems with Random
Strengths, J. Struct. Engrg., A.S.C.E., Vol. 111, No. 6, pp. 1306-1325.
MADSEN, H.O. and ZADEH, M. (1987). Reliability of Plates Under Combined Loading,
Proc. Marine Struct. Rel. Symp., S.N.A.M.E., Arlington, Virginia, pp. 185-191.
MELCHERS, R.E. (1989). Load Space Formulation for Time Dependent Structural
Reliability, Research Report No. 044.11.1989, Department of Civil Engineering and
Surveying, The University of Newcastle.
VENEZIANO, D., GRIGORIU, M. and CORNELL, C.A. (1977). Vector Process Models for
Structural Reliability, J. Engg. Mech. Divn., A.S.C.E., Vol. 103, No. EM3,
pp. 441-446.
WEN, Y.K. and CHEN, H.C. (1987). On Fast Integration for Time-Invariant Structural
Reliability, Prob. Engrg. Mech., Vol. 2, No. 3, pp. 156-162.
WEN, Y.K. and CHEN, H.C. (1989). System Reliability Under Time Varying Loads, I,
J. Engrg. Mech., A.S.C.E., Vol. 115, No. 4, pp. 808-823.
SOME STUDIES ON
AUTOMATIC GENERATION OF STRUCTURAL FAILURE MODES
ABSTRACT.
1. INTRODUCTION
and in some cases failure mode equations generated in this way are dependent on the failure paths. This is not consistent with rigid-plastic analysis. An attempt is also made in this paper to explain such a discrepancy.
partial failure path. The probability P_fp^(p)(q) of the failure path r_1 → r_2 → ... → r_p is calculated as

    P_fp^(p)(q) = P[ ∩_{i=1}^{p} F_{r_i}^(i)(q) ]    (1)

where F_{r_i}^(i)(q) is the failure event that element r_i fails at the i-th order of the sequence, i.e., F_{r_i}^(i)(q) = ( Z_{r_i}^(i)(q) ≤ 0 ). The superscript p denotes the length of the failure path and q is used to denote a particular failure path. When p < p_q, P_fp^(p)(q) is the probability of a partial failure path, while for p = p_q it is the probability of a complete failure path.
The failure probability P_fp^(p)(q) is estimated by evaluating its lower and upper bounds, P_fp^(p)(q)(L) and P_fp^(p)(q)(U). For example, these bounds are given by the following formulas [1], of which the upper bound reads

    P_fp^(p)(q)(U) = min_{3 ≤ j ≤ p} { min( P_fp^(j-1)(q)(U), P[ F_{r_1}^(1)(q) ∩ F_{r_j}^(j)(q) ] ) }    (6)
Eq. (5) needs the safety margins at all the failure stages, while Eq. (6) uses only the safety margins at the first and last stages and the upper bound of the preceding failure path probabilities.
There are too many failure paths in a redundant structure to generate all of them, which necessitates a procedure for selecting only the probabilistically significant failure paths. Efficient methods using a branch-and-bound technique have been proposed [2,1], and the algorithmic procedure for the original version is given as follows:
Step 1 (initializing)
Set P_fpM = 0, X_c = ∅, X_t = ∅, and X = X_0.
Step 2 (partitioning)
1. Proceed one failure stage by adding each of the potential failure elements to the specified partial failure path. The
Step 5 (terminating)
If X=¢ , i.e., there are no failure paths left for branching,
the search is terminated. If not, go to step 3-2 by selecting
the failure path Xs with the maximum upper-bound failure prob-
ability from the set of the failure paths with the largest
path-length in the set X.
to the set of all the potential failure paths for branching, i.e., the set X. This is called a width-first branching rule. Fig. 1 illustrates the two branching rules.
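As a concrete illustration, the branch-and-bound search of Steps 1-5 can be sketched as follows. This is a hedged sketch, not the authors' implementation: `upper_bound`, `lower_bound` and `is_complete` are placeholders for Eqs. (3)-(6) and the mechanism check, and the toy probabilities are purely illustrative.

```python
def branch_and_bound(elements, upper_bound, lower_bound, is_complete):
    """Sketch of the failure-path search. `upper_bound(path)` and
    `lower_bound(path)` stand in for Eqs. (3)-(6); `is_complete(path)`
    stands in for the check that a path forms a collapse mechanism."""
    best_lower = 0.0      # best lower bound found for a complete path
    active = [()]         # X: partial failure paths open for branching
    complete = []         # complete failure paths retained
    while active:         # Step 5: stop when no paths are left
        # branch from the open path with the maximum upper bound
        path = max(active, key=upper_bound)
        active.remove(path)
        for e in elements:                     # Step 2: extend by one stage
            if e in path:
                continue
            child = path + (e,)
            if upper_bound(child) < best_lower:
                continue                       # discard: cannot be dominant
            if is_complete(child):
                complete.append(child)
                best_lower = max(best_lower, lower_bound(child))
            else:
                active.append(child)
    return complete

# toy example: path probability approximated as a product (illustrative only)
p = {1: 0.1, 2: 0.2, 3: 0.3}
def ub(path):
    prob = 1.0
    for e in path:
        prob *= p[e]
    return prob
lb = lambda path: 0.5 * ub(path)
paths = branch_and_bound([1, 2, 3], ub, lb, lambda path: len(path) == 2)
```

The search discards extensions whose upper bound falls below the best complete-path lower bound found so far; this pruning is what makes the selection of dominant failure paths economical.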
In order to calculate the upper bounds of the probability of occurrence for the failure paths, either Eq. (3) or Eq. (4) is applied, while the lower bound is evaluated with either Eq. (5) or Eq. (6).
Next, a numerical example is given to find dominant failure paths by using the depth-first and width-first branching rules, respectively. Consider a simple frame as shown in Fig. 2. It has eight
potential failure elements. Fig. 3 illustrates all the structural
failure modes formed with those failure elements. The numerical data
are listed in Table 1.
Following the branch-and-bound procedures described above, ele-
ments are selected as failed elements one by one until a dominant com-
plete failure path with maximum lower-bound probability of occurrence
is found. Fig. 4 shows the search tree using the depth-first branch-
ing rule. There are 27 branching stages in all. The complete failure
paths are attained at the branching stages 5, 6, 9, 10, 11, 20, 21,
and 24. Their lower and upper bounds on probabilities of occurrence
Fig. 2. A simple frame structure (two 5 m spans).
[Fig. 3: structural failure modes formed by the failure-element sets (3,5,6), (2,5,7), (3,5,7), (2,5,6), (1,3,6,8), (1,2,7,8), (1,3,7,8), (1,2,6,8), among others.]
Table 2. Selected dominant complete failure paths for the simple frame
structure
[Fig. 4. Search tree using the depth-first branching rule (failure stages 1-4 vs. branching stage).]
[Fig. 5. Search tree using the width-first branching rule (failure stages 1-4 vs. branching stage).]
are shown in Table 2. It is seen that the failure path with the maximum lower-bound probability is No. 7. Fig. 5 shows the search tree using the width-first branching rule. The search requires 22 branching stages, which is fewer than for the depth-first branching. The complete failure paths are formed at stages 11, 14, and 19. The three paths are the same as the last three obtained from the depth-first branching, as shown in Table 2.
Comparing the two branching rules, it is observed that the width-first branching rule finds the dominant complete failure path with the largest probability of occurrence earlier than the depth-first branching rule does. As the first complete failure path selected by the width-first branching rule has quite a large probability of occurrence, many partial failure paths are discarded. In Table 2, the structural failure modes selected in the depth-first branching are (1,5,7,8) and (2,5,7), while in the width-first branching only (2,5,7) is selected.
3. STRUCTURAL MECHANICS
[Figure: element resistance-deformation relations. (a) Elasto-plastic, with plastic resistance R_p. (b) Rigid-plastic.]
    R_1 + 0.20 R_2 - 3.03 S = 0    (7a)
    R_1 + 0.25 R_3 - 3.78 S = 0    (7b)
    R_2 + 1.24 R_3 - 3.78 S = 0    (7c)
[Figure: a simple truss structure under the load S (> 0).]
[Fig. 8 data: the tree of failure paths for the truss, each branch carrying a failure-mode inequality of the form R_i ± c R_j - c' S < 0 together with the condition on R_1, R_2, R_3 delimiting its validity.]
Fig. 8 Failure paths and equations for the simple truss structure
REFERENCES
Abstract
Introduction
The statistical models for load and resistance are based on available traffic surveys, test data and analysis. Reliability is calculated for individual girders and for bridges treated as structural systems. The main elements of the system are girders and a composite concrete slab.
Deteriorated girders reduce the reliability. The importance of various load and resistance parameters can be evaluated by sensitivity analysis. Sensitivity functions are developed for girders and bridges.
The major load components considered in this study are dead load, D, live load,
L, and dynamic load, I. Statistical parameters for bridge loads were recently evalu-
ated in conjunction with the development of the Ontario Highway Bridge Design Code
(OHBDC) (Nowak et al. 1990) and LRFD (Load and Resistance Factor Design) for
AASHTO (Nowak et al. 1989).
Live load is the effect of trucks moving on the bridge. The major parameters which are considered include truck weights and axle configurations, traffic volume, multiple presence (more than one truck simultaneously on the bridge), truck transverse position and reference time period. The basis for derivation of the live load model is truck survey data collected by the Ministry of Transportation Ontario (Agarwal and Wolkowicz 1976). The analysis was performed by Nowak and Hong (1990).
Surveyed truck data were used to develop distribution functions of moments for various bridge spans. The results are plotted on normal probability paper in Fig. 1, for spans of 3 to 30 m. The horizontal scale represents the calculated moment divided by the design moment specified in the OHBDC (1983). Also shown are extrapolated upper tails of the distributions. The surveyed trucks represented approximately two weeks of heavy traffic. The probability levels corresponding to a 75 year lifetime, and other shorter periods, are shown in Fig. 1.
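Plotting on normal probability paper, as in Fig. 1, amounts to pairing each sorted observation with the inverse standard normal of its plotting position. A minimal sketch (the moment ratios below are hypothetical, not survey values):

```python
from statistics import NormalDist

def normal_probability_coords(samples):
    """Map sorted samples to (value, standard normal variate) pairs,
    using the plotting position i/(n+1) -- the usual way data are
    placed on normal probability paper."""
    xs = sorted(samples)
    n = len(xs)
    return [(x, NormalDist().inv_cdf(i / (n + 1))) for i, x in enumerate(xs, start=1)]

# hypothetical moment ratios from a truck survey
coords = normal_probability_coords([0.42, 0.55, 0.61, 0.70, 0.86])
```

A straight line on these coordinates indicates a normal distribution; the extrapolated upper tails in Fig. 1 correspond to extending such a line to the probability levels of the longer return periods.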
The maximum 75 year live load was derived by extrapolation of truck data and simulations to model the multiple presence. The mean maximum moment was determined for a single lane and for two lane bridges. For a single lane and spans up to about 30-40 m, the load is governed by a single truck. For longer spans, two trucks produce the maximum moment. The results are shown in Fig. 2 for various time periods. For two lanes, it was observed that two side-by-side trucks with fully correlated loads govern. Each of the two trucks turned out to be the maximum monthly truck. The maximum monthly moment is about 15 percent lower than the maximum 75 year moment.
[Fig. 1. Distribution functions of moment ratio (calculated moment divided by the OHBDC design moment) on normal probability paper, for spans of 3, 6, 9, 12, 18 and 30 m, with probability levels marked for periods from 1 day to 50 years.]
[Fig. 2. Mean maximum moment ratio vs. span (0-60 m) for various time periods.]
[Figure: dynamic and static components of midspan deflection.]
The test results indicated that the mean I (as a fraction of the mean L) is 0.09-0.17. However, most of the larger values of I correspond to light trucks, and very little data is available for heavy trucks. Therefore, a numerical procedure was developed for simulation of the dynamic interaction between bridge, truck and surface. It was observed that the dynamic load decreases for increased truck weights.
The maximum value of live load is caused by two trucks side-by-side. The dynamic load resulting from two trucks is smaller than for one truck by about 20-25 percent. In further calculations, the dynamic load applied to the mean maximum 75 year moment is 0.13 m_L for a single truck and 0.09 m_L for two trucks, where m_L = mean live load. The coefficient of variation of I is 0.8.
[Figure: moment-curvature curves (mean and mean ± standard deviation) for W33x130, W24x76, W36x210 and W36x300 sections; curvature in rad/in.]
mate moment is 1.05-1.06, and the coefficient of variation is V = 0.10-0.105. For the yield moment the mean-to-nominal ratio is 1.01-1.03, with V = 0.11.
Bridge resistance is defined in terms of the ultimate truck weight. The numerical value depends on truck configuration (axle spacing, axle load ratio), truck position (longitudinal and transverse) and multiple presence (number of trucks on the bridge). The most common trucks on American highways are single vehicles (3 axle trucks) and semi-trailers (5 axle trucks), as shown in Fig. 6. Various combinations of these two vehicles are considered to determine the critical load. For each combination, axle loads are increased gradually until the deformations exceed acceptable limits. The deformations are measured in terms of the maximum girder deflection. For an 18 m span bridge, the resulting load-deflection curves are plotted in Fig. 7 for a single truck and a semi-trailer. The first plastic hinge is formed in one of the girders at about 65 percent of the ultimate load.
Reliability Analysis
Reliability indices are calculated for girders and bridge systems using the Rackwitz and Fiessler procedure (1978). The limit state function is
    g = R - Q    (1)
where R = resistance and Q = total load effect. Q is treated as a normal variable, and R as lognormal.
(2)
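For g = R - Q with Q normal and R lognormal, the Rackwitz-Fiessler procedure replaces R by an equivalent normal variable at the current design point and iterates. A minimal sketch, with illustrative parameter values that are not taken from the paper:

```python
import math

def beta_rackwitz_fiessler(mR, vR, mQ, sQ, iters=30):
    """Sketch of the Rackwitz-Fiessler iteration for g = R - Q with
    R lognormal (mean mR, coefficient of variation vR) and Q normal
    (mean mQ, standard deviation sQ)."""
    # lognormal parameters of R
    zeta = math.sqrt(math.log(1.0 + vR**2))
    lam = math.log(mR) - 0.5 * zeta**2
    r = mR                       # starting design-point value of R
    beta = 0.0
    for _ in range(iters):
        # equivalent normal st.dev./mean of R at the design point r
        s_eq = zeta * r
        m_eq = r * (1.0 - math.log(r) + lam)
        beta = (m_eq - mQ) / math.sqrt(s_eq**2 + sQ**2)
        alpha_R = s_eq / math.sqrt(s_eq**2 + sQ**2)
        r = m_eq - alpha_R * s_eq * beta   # updated design point
    return beta
```

The iteration converges quickly because g is linear in the equivalent normal variables; only the point at which R is "normalized" moves between iterations.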
Two major North American bridge design codes are considered, the AASHTO Specifications (1989) and the Ontario Highway Bridge Design Code (OHBDC 1983). The AASHTO design equation is
    1.3 D + 2.17 (L + I) ≤ φ R    (3)
where φ = resistance factor (1.0 for steel). The OHBDC (1983) load and resistance factors are,
[Fig. 7. Load-deflection curves for an 18 m span bridge under a single truck and a semi-trailer; truck load in kips (1 kip = 4.448 kN) vs. deflection.]
[Figure: girder reliability index vs. span (0-70 m) for OHBDC (1983) and AASHTO (1989) designs.]
System reliability indices are calculated for a typical bridge with five girders, spaced at 2.4 m. Spans from 12 to 30 m are considered. System reliability is strongly related to the girder reliability level. The relationship between the girder reliability and system reliability indices is shown in Fig. 9. The effect of correlation between resistances of girders is also considered. Two extreme cases are full correlation and no correlation. For an 18 m span bridge, the results are plotted in Fig. 10.
Sensitivity Analysis
Sensitivity functions relate the reliability index to the realization of selected parameters, such as material properties, dimensions or load components. Parameters considered include the steel yield stress, Fy, the plastic section modulus, Z (compact sections are considered), the slab concrete strength, fc', the slab thickness, ts, the slab effective width, b, dead load, D, live load, L, and dynamic load, I. In the sensitivity analysis for the bridge, the reliability of a girder, or a group of girders, is also considered as a parameter.
The results of the analysis performed for composite girders are shown in Figs. 11 and 12, for 12 and 30 m spans, respectively. The most important parameters are Fy and Z, while parameters related to the concrete slab have a limited effect on reliability.
[Fig. 9 data: system reliability index vs. girder reliability index for spans of 40, 60, 80 and 100 ft (1 ft = 0.305 m).]
Fig. 9 System Reliability vs. Girder Reliability for Various Spans.
[Fig. 10. System reliability index vs. girder reliability index for an 18 m span, for ρ = 0 and ρ = 1, where ρ = coefficient of correlation between girder resistances.]
[Fig. 11. Sensitivity functions for a 12 m span: change in reliability index vs. percent change from nominal (0-50%) for Fy, Z, L, D, I, fc', b and ts.]
[Fig. 12. Sensitivity functions for a 30 m span: change in reliability index vs. percent change from nominal (0-50%) for the same parameters.]
[Fig. 13 data: 34 ft roadway with right and left lanes; five girders at 4 @ 8 ft = 32 ft, with 1 ft overhangs; 1 ft = 0.305 m.]
Fig. 13 Cross Section of the Considered Bridge.
[Figures: bridge system reliability index, and percent loss of system reliability, vs. percent reduction in girder reliability (0-80%), for girder 1 or 5, girders 2, 3 & 4, girders 1 & 2, and girders 1 & 5.]
The results indicate that the exterior girders are more important from the system reliability point of view. System reliability is not very sensitive to the reliability of an individual girder: a 70 percent reduction in girder reliability causes a 15 percent drop in the system reliability. However, if two or more adjacent girders are subjected to a loss of reliability, then the system reliability is considerably reduced.
Conclusions
Acknowledgements
The presented research was partially supported by the National Science Founda-
tion, under Grant No. MSME-8715496, with Program Director Kenneth Chong, which
is gratefully acknowledged.
References
Agarwal, A. C. and Wolkowicz, M., 1976, "Interim Report on 1975 Commercial Vehicle Survey", Research and Development Division, Ministry of Transportation and Communications, Downsview, Ontario.
Billing, J. R., 1984, "Dynamic Loading and Testing of Bridges in Ontario," Canadian Journal of Civil Engineering, Vol. 11, No. 4, December, pp. 833-843.
Hwang, E-S. and Nowak, A. S., 1990, "Dynamic Analysis of Girder Bridges", Transportation Research Record No. 1223, Washington, D.C., pp. 85-92.
Hwang, E-S. and Nowak, A. S., 1990, "Simulation of Dynamic Load for Bridges", ASCE, Journal of Structural Engineering, submitted.
Nowak, A. S., Hong, Y-K. and Hwang, E-S., 1990, "Calculation of Load and Resistance Factors for OHBDC 1990", Report UMCE 90-06, Department of Civil Engineering, University of Michigan, Ann Arbor, MI.
Nowak, A. S. and Hong, Y-K., 1990, "Bridge Live Load Models," ASCE, Journal of Structural Engineering, submitted.
Nowak, A. S., Hong, Y-K., Tabsh, S. W., Hwang, E-S., Abi-Nassif, H. and Ting, S-C., 1989, "Calibration Task Group," Report UMCE 89-17, Department of Civil Engineering, University of Michigan, Ann Arbor, MI.
OHBDC, 1983, "Ontario Highway Bridge Design Code", Ministry of Transportation Ontario, Downsview, Ontario, Canada.
Rackwitz, R. and Fiessler, B., 1978, "Structural Reliability Under Combined Random Load Sequences", Computers and Structures, No. 9, pp. 484-494.
Tabsh, S. W. and Nowak, A. S., 1990, "Reliability of Highway Bridges," ASCE, Journal of Structural Engineering, tentatively accepted.
Tabsh, S. W., 1989, "Reliability-Based Sensitivity Analysis of Girder Bridges", Ph.D. Thesis, Department of Civil Engineering, University of Michigan, Ann Arbor, MI.
Tantawi, H. M., 1986, "Ultimate Strength of Highway Girder Bridges", Ph.D. Thesis, Department of Civil Engineering, The University of Michigan, Ann Arbor, MI.
Zhou, J-H. and Nowak, A. S., 1988, "Integration Formulas to Evaluate Functions of Random Variables", Structural Safety Journal, No. 5, pp. 267-284.
LONG-TERM RELIABILITY OF A JACKUP-PLATFORM FOUNDATION
Knut O. Ronold
Det norske Veritas, P. O. Box 300, N-1322 Høvik, Norway
ABSTRACT
A probabilistic model for analysis of the foundation stability of a jack-up platform foundation is
presented. A jack-up platform with three independent legs supported by individual spudcan footings
is considered as an example, and a two-dimensional representation of this platform is adopted. Con-
ventional bearing capacity failure as well as horizontal sliding of one of the spudcan footings are
considered. Uncertainties in wave and wind loading as well as in soil strength properties are
included. The long-term reliability of the foundation is assessed by means of a nested application of
a first-order reliability method.
I. INTRODUCTION
Jack-up platforms are used offshore, mainly for short-term commissions in connection with exploration and drilling for oil. Recently, long-term applications of this mobile platform type have also become attractive as an alternative to the use of fixed and more permanent platform types, for example for oil production on marginal fields.
A jack-up platform consists of a hull supported by 3 or 4 legs which are jacked down to touch
the seafloor when the platform is installed on an offshore site. Each platform leg is usually equipped
with a footing which forms a foundation for transfer of the platform forces to the foundation soils.
The footings may be so-called spudcans with a conical shape. During installation the spudcans will
penetrate the soil under the selfweight of the platform in combination with some temporarily applied
vertical preload. The final penetration depth of the spudcans and the corresponding final contact
area between the spudcan and the soil will be governed by equilibrium between the applied vertical
forces and the bearing capacity of the foundation soils.
During a storm the platform is subjected to wave forces and possibly current forces on the legs,
and wind forces on the hull. In the following, emphasis is given to the wave and wind loading.
Foundation failure under one or more platform footings is one of the most frequent causes for
jack-up platform accidents with consequences ranging from limited structural damage to capsize
and total platform loss. The environmental properties as well as the soil properties governing such a
failure are encumbered with uncertainties. A probabilistic approach to the problem is therefore
adopted, based on available analysis methods. The approach is described in this report, and its
practical application is demonstrated by presentation of an example case.
Emphasis is laid on presenting an illustrative example for application of reliability methods to
analysis of a jack-up platform foundation. Some simplifications are therefore made, first of all by
limiting the number of variables which are modeled as stochastic variables. Only the variables
which are expected to be most important, namely those pertaining to the wave and wind loading
and those pertaining to the soil, are modeled as stochastic variables. An example with all uncertain
variables modeled as stochastic variables would have become very comprehensive, and the computa-
tions would have become very time-consuming.
2. EXAMPLE PLATFORM
A typical 3-legged jack-up platform is considered. The platform is presented in Figure 1 with
major geometrical data marked out. The example platform is founded on a sand in 70 meters of
water in the North Sea.
3. ENVIRONMENTAL CONDITIONS
A storm is assumed to consist of a stationary sea state of duration T_storm = 6 hours with significant wave height H_S and corresponding mean zero-upcrossing period T_z. The distribution of the significant wave height H_S in a sea state of 6 hours duration is assumed to be a Weibull
[Figure 1. The example jack-up platform (barge with one front leg and two aft legs), with the direction of loading indicated; hull width 16 m, water depth 70 m.]
distribution, see Bitner-Gregersen and Haver (1989). According to this assumption the cumulative distribution function for H_S is

    F_HS(h) = 1 - exp( -((h - γ)/α)^β )    (1)

where α = 1.498, β = 1.146 and γ = 0.679 for the considered North Sea location.
The significant wave height H_S in an arbitrary 6 hour sea state can then be expressed in terms of a standardized normally distributed variable U_1,

    H_S = γ + α ( -ln(1 - Φ(U_1)) )^{1/β}    (3)

where N = D / T_storm denotes the total number of sea states during the operation of the platform at the considered location. This is based on the assumption that the significant wave heights in the N sea states are mutually independent.
The mean zero-upcrossing period T_z is conditional on the significant wave height H_S. According to Bitner-Gregersen and Haver (1989), the distribution of T_z conditional on H_S can be taken as a lognormal distribution. The mean zero-upcrossing period can then be expressed in terms of a standardized normally distributed variable U_2 as follows,

    T_z = exp(U_2 σ + μ)    (4)

where

    μ = 0.933 + 0.578 H_S^{0.395}    (5)

and

    σ = 0.055 + 0.336 exp(-0.585 H_S)    (6)
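Eqs. (1) and (4)-(6) define a mapping from the standardized normal variables U_1 and U_2 to the physical sea-state variables H_S and T_z, which is the transformation a first-order reliability analysis works in. A sketch (the function names are ours, not the paper's):

```python
from statistics import NormalDist
import math

ND = NormalDist()

def hs_from_u1(u1, a=1.498, b=1.146, g=0.679):
    """Invert the Weibull CDF of Eq. (1): Hs = g + a*(-ln(1-F))**(1/b),
    with F = Phi(u1); parameters for the considered North Sea location."""
    F = ND.cdf(u1)
    return g + a * (-math.log(1.0 - F)) ** (1.0 / b)

def tz_from_u2(u2, hs):
    """Eqs. (4)-(6): lognormal Tz conditional on Hs."""
    mu = 0.933 + 0.578 * hs ** 0.395
    sig = 0.055 + 0.336 * math.exp(-0.585 * hs)
    return math.exp(u2 * sig + mu)
```

Each point (u1, u2) in standard normal space thus corresponds to one realization of the sea state, so the search for the design point can be carried out entirely in the U-space.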
The mean upcrossing rate of the level η in a sea state is

    ν⁺(η) = (1/2π) √(λ_2/λ_0) exp( -η²/(2λ_0) )    (9)

for a Gaussian process. Hence, the cumulative distribution function for the maximum sea elevation η_max becomes

    F_ηmax(η) = exp( -ν⁺(η) T_storm )    (10)

Under the Gaussian assumption, the maximum wave height H_max equals two times the maximum sea elevation and can now be expressed as

    H_max = 2 √( 2λ_0 ln( (T_storm/2π) √(λ_2/λ_0) / (-ln Φ(U_3)) ) )    (11)

where U_3 is a standardized normally distributed variable which expresses the inherent uncertainty of the maximum wave height.
The corresponding wave period T is assumed to follow a Longuet-Higgins distribution, see Longuet-Higgins (1983), and can be expressed in terms of a standardized normally distributed variable U_4 as follows
    (12)
where

    ν = ( λ_0 λ_2 / λ_1² - 1 )^{1/2}    (13)

is a spectral bandwidth measure and where

    C = (…) H_max    (14)
    (15)
where F is the total horizontal force on a cylindrical vertical leg with diameter D, d is the water depth, η is the sea elevation above mean sea water level, u is the water particle velocity due to the wave, and u̇ is the corresponding water particle acceleration. C_D is a drag coefficient, and C_I is an inertia coefficient. ρ is the unit weight of sea water.
It is noticed that F depends on the position of the wave relative to the platform legs. It is also noticed that F is a static force, i.e. no dynamic amplification is included. This may be a somewhat unconservative approximation, especially for a platform in 70 m of water as in the present case.
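Morison's equation referred to above gives the force per unit length of a vertical cylinder as a drag term plus an inertia term. A sketch, with illustrative default values (C_D, C_I and the water density below are assumptions, not the values used in the paper):

```python
import math

def morison_force_per_length(u, u_dot, D, rho=1025.0, CD=1.0, CI=2.0):
    """Morison's equation: drag + inertia force per unit length of a
    vertical cylinder of diameter D. u is the water particle velocity,
    u_dot its acceleration; coefficient values are illustrative."""
    drag = 0.5 * rho * CD * D * abs(u) * u          # |u|u keeps the sign
    inertia = rho * CI * (math.pi * D**2 / 4.0) * u_dot
    return drag + inertia
```

The total leg force F is obtained by integrating this expression from the sea floor to the instantaneous sea elevation, which is what makes F depend on the position of the wave relative to the leg.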
Current forces on the platform legs are taken into account according to Morison's equation based on a current velocity

    v = v_1 (z/d)^{1/7} + v_2 (z/d)    (16)

where z is the distance above the sea floor, d is the total water depth, v_1 is the tidal current velocity at the still water level, and v_2 is the wind generated current velocity at the still water level. Deterministic values v_1 = 0.85 m/s and v_2 = 0.73 m/s are used in this analysis. Other current models are available in Bitner-Gregersen and Haver (1989).
Wind forces on the hull are calculated according to a formula
    (17)
where A_hull is the area exposed to wind and u is the averaged sustained wind speed acting on the hull, calculated as

    u = u_10 √(0.93 + 0.007 z)    (18)

where u_10 is the wind speed at 10 m height and z is the average height of the hull above sea level.
The wind speed u_10 at 10 m height is correlated with the significant wave height H_S. The distribution of the wind speed at 10 m height conditioned on the significant wave height H_S can be represented by a Weibull distribution,

    F_u10|HS(u) = 1 - exp( -(u/s)^r )    (19)

where r and s are functions of H_S, see Bitner-Gregersen and Haver (1989).
Based on this, u_10 can be expressed in terms of a standardized normally distributed variable U_5 as follows

    u_10 = s ( -ln(1 - Φ(U_5)) )^{1/r}    (20)

where

    r = 2.424 + 0.233 H_S^{1.12}    (21)

    s = 4.40 + 1.94 H_S    (22)

for the present location. The exposed wind area is A_hull = 400 m², and its average height above sea level is taken as z = 28 m. Wind gust is not included.
Wind and current forces are assumed to act collinearly with the wave forces, and all forces in the considered example are assumed to act in a vertical plane from the platform aft towards the platform front. This representation of the loading is adopted to suit a two-dimensional frame model for the jack-up structure, see below. This implies that the directionality of the loading is not accounted for in this study. This is believed to be conservative.
The horizontal capacity of a spudcan footing may be found according to friction considerations and will be proportional to the vertical force.
The vertical capacity of a spudcan footing will be dependent on the horizontal force and may be calculated according to Brinch Hansen's formula for bearing capacity

    Q_V = ( ½ γ B N_γ s_γ d_γ i_γ + p_0' N_q s_q d_q i_q ) A    (23)

where B is the diameter of the contact area A between the spudcan and the soil, and γ is the submerged unit weight of the soil. γ = 10 kN/m³ is used in this analysis. s, d and i denote shape factors, embedment depth factors and load inclination factors, respectively, see Det norske Veritas (1977). p_0' is the effective overburden pressure. The contact area A results from the vertical loading applied during installation. This loading consists of the submerged selfweight of the platform, in this case W = 109 MN, and a temporary preload, W_p = 120 MN.
N_q and N_γ are bearing capacity factors which depend on the angle of friction φ of the sand:

    N_q = exp(π tan φ) tan²(π/4 + φ/2)    (24)

    N_γ = 1.5 (N_q - 1) tan φ    (25)
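Eqs. (23)-(25) can be evaluated directly. The sketch below sets all shape, depth and inclination factors to 1.0 for brevity, which is a simplification of Eq. (23) and not the paper's full calculation:

```python
import math

def bearing_capacity(phi_deg, B, A, p0, gamma=10.0):
    """Brinch Hansen vertical capacity of Eq. (23), with all shape,
    depth and inclination factors taken as 1.0 in this sketch.
    gamma in kN/m3, p0 in kPa, B in m, A in m2 -> Q_V in kN."""
    phi = math.radians(phi_deg)
    Nq = math.exp(math.pi * math.tan(phi)) * math.tan(math.pi / 4 + phi / 2) ** 2  # Eq. (24)
    Ngamma = 1.5 * (Nq - 1.0) * math.tan(phi)                                      # Eq. (25)
    return (0.5 * gamma * B * Ngamma + p0 * Nq) * A
```

For φ = 30° the sketch reproduces the classical value N_q ≈ 18.4, which is a convenient check on the implementation.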
The angle of friction φ depends on the relative density D_r of the sand, and a model

    φ = a_1 + a_2 D_r + ε    (26)

is chosen. A regression analysis of 14 observations of pairs (D_r, φ) from drained triaxial tests on sand specimens in the laboratory yields the following statistical properties for a_1, a_2 and ε:

    E[a_1, a_2] = [19.43, 25.76],  D[a_1, a_2] = [1.593, 2.088],  ρ_a = -0.9742,  σ_ε = 1.35    (27)
For a bearing capacity type of failure the conventional factor of safety can be calculated as

    FS_B = Q_V / F_V    (28)

where F_V is the actual vertical force and Q_V is the available vertical capacity for the considered footing. For a horizontal sliding type of failure the conventional factor of safety can be calculated as

    FS_S = Q_H / F_H    (29)

where F_H is the actual horizontal force and Q_H is the available horizontal frictional capacity for the considered footing.
A two-dimensional frame model is assumed for the jack-up platform with the hull and the legs modeled as beams with some structural stiffness. The trussed platform legs are represented as circular cylinders with an equivalent diameter, and equivalent drag and inertia coefficients are applied in the analysis. Three foundation springs, corresponding to three degrees of freedom, are modeled at each spudcan footing, namely one for vertical translation, one for horizontal translation, and one for rotation. The soil is generally non-linear, and this non-linearity is represented in the analysis by modeling the springs as hyperbolic load-displacement relationships for the mentioned three degrees of freedom, see Figure 2.
[Figure 2. Hyperbolic load-displacement relationship for a foundation spring: force F vs. displacement, approaching a horizontal asymptote.]
At low load levels, initial foundation stiffnesses are calculated based on solutions for surface foundations on elastic halfspaces in dependence of an initial shear modulus G of the soil, see Gazetas (1983). The hyperbolic load-displacement relationships are modeled with these initial elastic stiffnesses and are uniquely defined by assuming horizontal load asymptotes equal to 1/0.95 = 1.052 times the respective capacities. G is a difficult property to assess, and it is usually encumbered with a significant uncertainty. Very little data are available for establishing statistical properties for G. Rather than applying a complicated model for G in dependence of the stress conditions in the soil, such as the model described by Whitman (1974), a simplified model is adopted with G represented by a normally distributed variable with mean value E[G] = 161000 kPa and standard deviation D[G] = 16000 kPa. This is used together with a deterministic Poisson's ratio ν = 0.3. The use of this simple approach to the representation of the shear modulus of the soil is justified later.
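The hyperbolic springs described above can be written down compactly: a curve with initial slope k_0 and a horizontal asymptote at 1/0.95 times the capacity. A sketch with hypothetical stiffness and capacity values:

```python
def hyperbolic_spring(delta, k0, capacity):
    """Hyperbolic load-displacement curve: initial stiffness k0 at zero
    displacement, horizontal asymptote at capacity/0.95 = 1.052*capacity,
    as assumed for the foundation springs."""
    F_inf = capacity / 0.95
    return delta / (1.0 / k0 + delta / F_inf)
```

The curve has slope k0 at the origin and approaches 1.052 times the capacity for large displacements, so the mobilized resistance never quite reaches the asymptote; the capacity itself is attained at a finite displacement.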
The loading of the platform, consisting of selfweight and environmental forces, is transferred to
forces and moments at the three spudcan footings. The final distribution of vertical forces, horizon-
tal forces and moments between the three footings depends on the stiffness of the structure in con-
junction with the stiffness of the three foundations, expressed in terms of the degree of mobilization
of the foundation capacity at each footing. This distribution of forces and moments is found by an
iterative numerical approach similar to the procedure described by Arnesen et al. (1988).
For a wave of a certain height and period, the factor of safety for a footing for a considered failure mode varies with the position of the wave as it passes the platform, and the factor of safety FS associated with the wave is hence taken as the minimum value of the factor of safety occurring during this passage. Failure occurs when FS is less than or equal to 1.0.
5. RELIABILITY ANALYSIS
Reliabilities are computed by a first-order reliability method as described in Madsen et al.
(1986). The input to a reliability analysis consists of a limit state function in terms of a set of basic
variables which consists of stochastic variables as well as deterministic parameters. The statistical
distributions of the stochastic basic variables are given in the above sections together with the
values of the deterministic parameters.
For a probabilistic analysis of the foundation stability of the considered example jack-up plat-
form the limit state function is chosen as
    g(X) = FS_B(X) - 1.0    (30)
when a bearing capacity failure of the single front footing is the critical failure mode for the plat-
form foundation. FSB is the conventional factor of safety pertaining to this failure mode for the
front footing, calculated as described previously. X denotes the stochastic variables, which govern
It is, however, reasonable to expect that other sea states than the most severe storm will also contribute to the probability of failure under the considered front leg footing during the period of operation of the platform. This requires solution of a series system of all sea states during the period of operation, in this case N = 1460 sea states. The problem can practicably be solved by a procedure which involves nested applications of reliability analyses in an iterative procedure, see Bjerager et al. (1988) and Wen and Chen (1987).
The stochastic variables X are divided into two groups, Y and Z. Z are the system variables, i.e. in this case the soil strength and stiffness variables, which are the same during all sea states. Y are the sea state variables, and they are assumed independent from one sea state to another.
A given outcome z of Z produces a conditional failure probability for the critical spudcan footing in an arbitrary sea state,

    P_FZ(z) = P[ g(Y, z) ≤ 0 ]    (32)

A conditional reliability index β_S corresponds to this probability and is found by a reliability analysis where Y is modeled as stochastic variables and Z = z is fixed,

    β_S = -Φ⁻¹( P_FZ(z) )    (33)
The N sea states are assumed to be independent, and when the probability is conditioned on
an outcome z of the system variables Z, the corresponding conditional safety margins for foundation
failure will be independent. The conditional probability of failure during the N sea states in the
period of operation can hence be calculated as
PFZ,N(z) = 1-(I-PFz (z»N (34)
The total probability of failure during the N sea states is found by integration over all possible outcomes z of Z

P_F = ∫ P_F|Z,N(z) f_Z(z) dz    (35)
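As an illustrative aside (not part of the original analysis), Eq. (34) and its small-probability approximation N·P_F|Z(z) can be checked numerically; the per-sea-state probability used below is a hypothetical value:

```python
def conditional_failure_prob_n(p_single: float, n: int) -> float:
    """Eq. (34): failure probability over n independent sea states."""
    return 1.0 - (1.0 - p_single) ** n

p = 1.0e-6   # hypothetical per-sea-state failure probability
N = 1460     # 6-hour sea states in one year of operation

exact = conditional_failure_prob_n(p, N)
approx = N * p  # valid when p << 1/N
print(exact, approx)
```

For p ≪ 1/N the two values agree to within a fraction of a percent.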
By introducing an auxiliary variable U which is standard normally distributed, this probability can be rewritten as

P_F = P[U + Φ⁻¹(1 − P_F|Z,N(Z)) ≤ 0]    (36)

see Bjerager et al. (1988). Insertion of the expressions from Eqs. (33) and (34) in Eq. (36) yields

P_F = P[U + Φ⁻¹(Φ(β_S(Z))^N) ≤ 0]    (37)
P_F is then solved by a first-order reliability analysis using the limit state function

h = U + Φ⁻¹(Φ(β_S(Z))^N)    (38)

such that

P_F = P[h ≤ 0]    (39)

and the corresponding unconditional long-term reliability index is β_L = −Φ⁻¹(P_F).
U and Z are represented as stochastic basic variables, and β_L can be solved provided the partial derivatives of β_S with respect to Z can be computed. These derivatives are equal to the parametric sensitivity factors which can be obtained as byproducts of the first-order computation of β_S as described above, see Madsen et al. (1986). The procedure for solution of P_F and β_L is hence a nested application of first-order reliability analyses, and this procedure is iterative in the sense that it has to be repeated until the conditional short-term reliability index β_S is calculated for a fixed set of the system variables Z = z equal to the design point z* pertaining to β_L.
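A minimal numerical sketch of the transformation entering the outer analysis (the value of β_S is hypothetical; Python's `statistics.NormalDist` supplies Φ and Φ⁻¹): for a fixed outcome z, Eqs. (33) and (34) give the long-term index directly, and Eq. (38) defines the limit state h.

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal distribution: Phi and Phi^{-1}

def limit_state_h(u: float, beta_s: float, n: int) -> float:
    """Limit state of Eq. (38): h = U + Phi^{-1}(Phi(beta_S)^N)."""
    return u + nd.inv_cdf(nd.cdf(beta_s) ** n)

def long_term_beta_fixed_z(beta_s: float, n: int) -> float:
    """Long-term index for a fixed z, combining Eqs. (33) and (34):
    beta_N = -Phi^{-1}(1 - Phi(beta_S)^N)."""
    p_n = 1.0 - nd.cdf(beta_s) ** n
    return -nd.inv_cdf(p_n)

# Hypothetical conditional short-term index beta_S = 4.5 over N = 1460
# sea states; the long-term index comes out substantially lower.
print(long_term_beta_fixed_z(4.5, 1460))
```

In the full nested procedure Z remains random and a first-order analysis is run on h; the sketch above only exercises the inner transformation for a fixed z.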
The conditional probability P_F|Z(z) is here calculated by a first-order reliability method. Because such a first-order result represents an approximation to the true probability, it may be encumbered with an error. This error will in general be systematically amplified by the exponentiation to the power N and will thus give an error in the probability P_F|Z,N(z). However, as long as P_F|Z(z) ≪ 1/N, the relative error in P_F|Z,N(z) due to an error in P_F|Z(z) will equal the relative error in P_F|Z(z), because P_F|Z,N(z) can then be approximated by N·P_F|Z(z). Otherwise the relative error in P_F|Z,N(z) will be larger.
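The small-probability part of this error statement can be verified numerically (values hypothetical): a 10 % perturbation of the per-sea-state probability carries through to P_F|Z,N almost unchanged when P_F|Z ≪ 1/N.

```python
def rel_error_in_pn(p: float, eps: float, n: int) -> float:
    """Relative error induced in P_N = 1 - (1 - p)^n by a relative
    error eps in the per-sea-state probability p."""
    exact = 1.0 - (1.0 - p) ** n
    perturbed = 1.0 - (1.0 - p * (1.0 + eps)) ** n
    return (perturbed - exact) / exact

# p << 1/N = 1/1460: a 10 % error in p gives close to a 10 % error in P_N.
print(rel_error_in_pn(1.0e-8, 0.10, 1460))
```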
The result obtained for the long-term probability P_F by the nested reliability analysis will be sufficiently accurate if the result for P_F|Z,N(z) is considered to be sufficiently accurate. For the present example the accuracy in P_F|Z,N(z) as determined by a first-order reliability method is considered to be sufficient because P_F|Z(z) ≪ 1/N is fulfilled. Problems may exist where this condition is not fulfilled, and more accurate methods than a first-order reliability method are then required for calculation of P_F|Z(z). These methods can be importance sampling methods, such as axis-orthogonal simulation, see Bjerager (1989). For problems with a low dimension of the variable space in the conditional reliability analysis, numerical integration of the probability P_F|Z(z) may form an attractive approach.
For the example problem the results of the nested long-term reliability analysis are shown in
Table 2.
It is noticed that the long-term reliability index β_L = 2.526 calculated by the nested reliability analysis is only slightly less than the index β_L = 2.547 obtained from the conventional reliability analysis. This result is very reasonable for a failure problem which is as dominated by the highest wave during the time of operation as this problem is.
a dominating role, because the maximum wave load in a 1-year time span as considered here is much more uncertain than the maximum wave load in a typical lifetime of 50 years for a fixed structure. The uncertainty in the wind loading, conditioned on the sea state, is found to be of insignificant importance. This may serve to justify the disregard of a wind gust superimposed on the modeled average sustained wind. The insignificant role played by the uncertainty in the friction angle is understandable, because the effect of the uncertainty in the friction angle is counteracted by a corresponding adjustment of the contact area in the installation phase. Sensitivity factors α have not been interpreted from the nested reliability analysis. However, it is reasonable to expect that the sensitivity factors α resulting from the conventional reliability analysis are representative also for the nested analysis, given the very close reliability results obtained from these two analyses.
Some of the parameters governing the foundation stability of the considered jack-up platform have not been modeled as random variables, but as constants. Some of the basic variables are described by subjectively chosen distribution parameters rather than by statistical estimates of mean values and standard deviations. It is therefore of interest to study the sensitivity of the reliability index β to changes in these parameters. Such sensitivities are output from the analysis in terms of ∂β/∂p, where p is the parameter in question.
One basic variable which is described by a subjectively chosen set of mean value and standard deviation is the relative density D_r, which governs the friction angle φ of the foundation sand. In the probabilistic analysis a mean value μ_Dr = 0.7 is used together with a standard deviation σ_Dr = 0.1. The following sensitivities in the reliability index β to changes in these two distribution parameters are computed,

∂β/∂μ_Dr = −0.73,  ∂β/∂σ_Dr = −0.14    (40)
It appears that the reliability is not excessively sensitive to the choice of either of these two distribution parameters, considering that reasonable changes of their values lie within a range of, say, 0.1–0.2.
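The use of such sensitivities is immediate: a first-order estimate of the change in β for small parameter changes is Δβ ≈ (∂β/∂μ_Dr)Δμ + (∂β/∂σ_Dr)Δσ. A sketch with the values of Eq. (40):

```python
# Sensitivities from Eq. (40) of the text.
DBETA_DMU = -0.73     # d(beta)/d(mean of D_r)
DBETA_DSIGMA = -0.14  # d(beta)/d(std. dev. of D_r)

def delta_beta(d_mu: float, d_sigma: float) -> float:
    """First-order change in the reliability index."""
    return DBETA_DMU * d_mu + DBETA_DSIGMA * d_sigma

# E.g. lowering the mean relative density from 0.7 to 0.6 raises beta
# by about 0.07 in this first-order approximation.
print(delta_beta(-0.1, 0.0))
```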
The analysis is carried out for a period of operation of one year for the example platform at the
considered site. Other periods of operation may be of interest as well, and a series of additional ana-
lyses has therefore been carried out for other time spans than one year, i.e., both nested reliability
analyses with integration of probability contributions from all sea states and conventional reliability
analyses with probability contribution only from the worst 6-hour sea state. The results of these
additional analyses are shown in Figure 3 in terms of the reliability index (3 versus the period of
operation D.

[Figure 3 plot: β from the nested analysis (all sea states) lies slightly below β from the conventional analysis (worst sea state only), both decreasing from about 2.5 toward 2.0 with increasing D.]
Figure 3 Reliability index β vs. period of operation D
It appears from Figure 3 that even for longer periods of operation than one year, the worst 6-hour sea state gives the major contribution to the total failure probability, as there is not much difference between β computed by the nested analysis and β computed for the worst sea state only. This difference, however, increases slightly with increasing period of operation, and there is a correspondingly increasing difference in the failure probability. This is a natural result since the longer the period of operation, the more sea states contribute to the total failure probability in the nested analysis.
The analysis results presented above all pertain to analyses carried out for the most likely failure mode for the example platform foundation, namely a bearing capacity type failure under the single front footing. As stated above, horizontal sliding of one or both of the two aft footings may be an alternative failure mode, and fully analogous reliability analyses are therefore carried out also for this failure mode. The result of the conventional reliability analysis, where only probability contributions from the worst sea state in one year are considered, is a failure probability P_F = 0.2467·10⁻³ with a corresponding reliability index β_L = 3.484. The result of the nested reliability analysis, where the probability contributions from all sea states in one year are integrated, is a failure probability of P_F = 0.2571·10⁻³ with a corresponding reliability index β_L = 3.473.
It is noticed that the long-term reliability index β_L as calculated by the nested reliability analysis is just slightly less than the reliability index obtained from the conventional reliability
analysis, so also for horizontal sliding it can be concluded that the major contribution to the total
failure probability comes from the worst sea state during the period of operation for the platform. As
for the bearing capacity failure, the predominant uncertainty source is found to be the uncertainty
in the environmental loading. The reliability index in excess of 3 implies a much smaller probability
of failure for the horizontal sliding type of failure than for the bearing capacity type of failure, and
this index can be assessed to be acceptably high as structural codes generally result in designs with
a reliability index between 3 and 5, see Madsen et al. (1986). In this context it is worthwhile notic-
ing that the consequences of horizontal sliding of one of the platform footings are much more severe
than the consequences of a bearing capacity failure under such a footing. Whereas the bearing capa-
city type of failure is of a self-stabilizing nature with little or no consequence for the platform struc-
ture, horizontal sliding of a platform footing may expose the platform legs to excessive stresses and
result in structural damage or even capsize of the platform.
7. CONCLUSIONS
A procedure for calculation of the long-term reliability of a jack-up platform foundation has
been presented, based on deterministic methods for evaluation of foundation stability in conjunction
with a nested application of first-order reliability analyses. A probabilistic analysis of the foundation
stability of an example platform founded on a sand in 70 meters of water has been performed
according to this procedure. In this analysis uncertainties were included in sea state, wave and wind
properties as well as in soil strength and stiffness properties. Two possible failure modes were con-
sidered, namely a conventional bearing capacity failure of one platform footing, and a horizontal
sliding of such a footing. The former was found to have the highest probability of occurrence,
whereas the latter was assessed to be the most dangerous for the platform structure.
The uncertainty sources have been studied through an analysis of the output from the reliabil-
ity analysis. It is shown that the major uncertainty is due to uncertainty in the determination of
the environmental properties, whereas the uncertainty in the soil strength and stiffness properties is
of only minor importance.
The sensitivity of the calculated failure probability to changes in subjectively chosen distribution parameters has been interpreted for a few example parameters, in order to assess the importance of making proper choices for parameters which are modeled as fixed values in the analysis.
Many properties govern the foundation stability of a jack-up platform, but only the sea state, wave and wind properties and the soil strength properties have been modeled as random variables in the presented example study, in order to limit the number of stochastic variables and reduce the computer time to an acceptable level. In a more comprehensive and more detailed probabilistic
analysis than the one presented here, it will be natural to model also other properties as random
variables, first of all the wind and current properties and the equivalent drag and inertia coefficients
for use in the load calculations for the platform legs. A stochastic representation of model uncertain-
ties related to failure modes and foundation spring models will also be of major interest in such an
extended analysis.
A three-dimensional platform model rather than the two-dimensional model used in this study
will allow for consideration of the directionality of the loading as well as the possibility of failure
under other footings than the single footings which are critical when the direction of the loading is
prescribed.
Close consideration should be given to the assumption of a Gaussian sea elevation process as a basis for the calculation of the wave loading. This is because recent research results indicate
that this assumption may not hold, even for such deep waters as considered here. A possibly non-
Gaussian sea elevation process will have impact on the wave height calculation as well as on the
sinusoidal wave assumption made for the force calculation.
Finally, dynamic amplification of the force response can be significant due to the large flexibil-
ity of the jack-up structure, especially in deep waters such as in the presented example, and should
therefore be included in a future analysis.
8. ACKNOWLEDGMENTS
This paper is based on work performed within the research program ''Reliability of Marine
Structures", which is supported by A.S Veritas Research, Saga Petroleum a.s., Statoil and Conoco
Norway Inc. This contribution is gratefully acknowledged. The opinions expressed in the paper are
those of the author and should not be construed as reflecting the views of the sponsoring companies.
9. REFERENCES
[1] Arnesen, K., Dahlberg, R., ~eey, H., and Carlsen, C.A., "Soil-Structure Interaction Aspects
for Jack-Up Platforms", Proceedings, 5th International Conference on Behaviour of Offshore
Structures, Trondheim, Norway, 1988.
[2] Bitner-Gregersen, E.M. and Haver, S., "Joint Long Term Description of Environmental
Parameters for Structural Response Calculation", Proceedings, 2nd International Workshop on
Wave Hindcasting and Forecasting, Vancouver, B.C., Canada, 1989.
[3] Bjerager, P., "Probability Computation Methods in Structural and Mechanical Reliability", in Computational Mechanics of Probabilistic and Reliability Analysis, ed. W.K. Liu and T. Belytschko, Elme Press International, Lausanne, Switzerland, 1989.
[4] Bjerager, P., Løseth, R., Winterstein, S.R., and Cornell, C.A., "Reliability Method for Marine Structures under Multiple Environmental Load Processes", Proceedings, 5th International Conference on Behaviour of Offshore Structures, Trondheim, Norway, 1988.
[5] Det norske Veritas, "Rules for the Design, Construction and Inspection of Offshore Structures, Appendix F, Foundations", Det norske Veritas, Høvik, Norway, 1977.
[6] Gazetas, G., "Analysis of Machine Foundation Vibrations: State of the Art", Soil Dynamics
and Earthquake Engineering, Vol. 2, No.1, 1983.
[7] Holm, C.A., Bjerager, P., and Madsen, H.O., "Long Term System Reliability of Offshore
Jacket Structures", Proceedings, 2nd IFIP Working Conference on Reliability and Optimization
of Structural Systems, ed. by P. Thoft-Christensen, London, England, Springer-Verlag, 1988.
[8] Longuet-Higgins, M.S., "On the Joint Distribution of Wave Periods and Amplitudes in a Ran-
dom Wave Field", Proceedings of the Royal Society of London, Vol. A389, pp. 241-258, 1983.
[9] Madsen, H.O., Krenk, S., and Lind, N.C., Methods of Structural Safety, Prentice Hall Inc.,
Englewood Cliffs, New Jersey, 1986.
[10] Ronold, K.O., ''Random Field Modeling of Foundation Failure Modes", Journal of Geotechnical
Engineering, ASCE, Vol. 116, No.4, pp. 554-570, 1990.
[11] Wen, Y.K. and Chen, C.H., "On Fast Integration for Time Variant Structural Reliability", Pro-
babilistic Engineering Mechanics, Vol. 2, No.3, pp. 156-162, 1987.
[12] Whitman, R.V., ''Representation of Soil-Structure Interaction for Offshore Gravity Structures",
Massachusetts Institute of Technology, 1974.
CONSTANT VERSUS TIME DEPENDENT SEISMIC DESIGN COEFFICIENTS
Abstract
We analyze two kinds of problem. In both we deal with structures whose design is
governed by earthquakes generated by a non-Poisson process. One problem
concerns structures designed in a supposedly optimal way but using a simplified
probability distribution of major-earthquake interoccurrence times. The second
problem consists in evaluating the expected loss caused by maintaining the design
coefficients constant in a building code intended to remain in effect over a given
number of years. The purpose of the first type of problem is to have bases for
selecting the simplified model. That of the second type is to guide in deciding
how often to change the coefficients in a building code.
We illustrate both kinds of problem through structures potentially subjected
mainly to subduction earthquakes from either a source not having had characteristic
earthquakes for several decades, or from one that produced such earthquakes four
years ago.
Interoccurrence times of characteristic events are assigned lognormal distributions with uncertain parameters, based on a Bayesian analysis. Other seisms are taken as Poisson generated. We find that either a lognormal or an exponential distribution, both with parameters adjusted to give the exact answer about five years from now, is adequate for the design of all structures to be built within the next ten years. However, use of a Poisson model with the mean occurrence rate leads to excessive losses.
Expected losses due to invariability of building-code coefficients are found
to increase practically with the square of the time during which the coefficients
remain constant.
Introduction
tions of interarrival times must be partly based on empirical data, they contain uncertain parameters. Calculation of optimal parameters is then burdensome. Hence the desirability of distributions with fixed parameters, provided they do not entail too large errors.
Using worldwide data and those from Mexican subduction earthquakes, Jara and Rosenblueth (1988) found that a lognormal distribution of large-event interoccurrence times was most satisfactory. Its recurrence period had a coefficient of variation of 0.22. On Mexico City's soft clay the threat of a major earthquake from the
Guerrero gap will dominate design until the next such event. Taking into account
smaller earthquakes, the authors found that a lognormal distribution with fixed
parameters was satisfactory for structures to be built within ten years following
the 1985 shocks. They indicated that a constant hazard rate would be adequate when
events caused by several sources have comparable importance, as such events are
nearly independent (G Grandori, private communication, 1984). That study did not
recognize uncertainties in the attenuation and site effects nor in structural ca-
pacity.
Cornell and Winterstein (1988) have pointed out that a constant hazard func-
tion will be adequate when the recurrence period of structural damage or collapse
is very uncertain and there are several potential sources of significant quakes.
Here we examine the consequences of having building-code design coefficients remain in effect for several years. We also explore the effects of the time elapsed since the last major earthquake, of the uncertainty in recurrence period, attenuation and structural capacity, and of the existence of many significant seismic sources.
When society is the subject for whom we wish to optimize, we shall assume that, locally, utility is linearly related to monetary gains and losses; we shall discount future utilities to obtain their present values by multiplying them by exp(−rt), where t is time and r a constant discount rate. We shall take r = 0.05/yr. This is consistent with the average rate in major monetary transactions within the last several decades, after correcting for inflation.
A study of a few ten-story reinforced concrete structures (Vargas and Jara, 1989) gave as initial cost

(1)

where C₀ is a constant, x is the design base shear coefficient, a = 0.05, b = 1.1 and c = 1.4.
L (2)
these structures. L includes the expected direct and indirect economic and material
losses to society, what society loses because some buildings cease to function, or
to do so properly, and the value that society places on loss of life and limb and
on suffering. In eq 2, the term linear in Pz(z) accounts for direct economic
losses.
If t were known, the expected present value of earthquake losses at the time of construction would be D = L exp[−r(t − t₀)], where t₀ is the time elapsed between the last characteristic earthquake and the structure's construction. Since t is uncertain we write

D = L ∫_{t₀}^{∞} f_T(t) e^{−r(t−t₀)} dt    (4)

where f_T(t) is the interoccurrence-time density conditioned on

P(T > t₀) = 1 − F(t₀)    (5)

the probability that a new quake has not occurred up to time t₀.
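A numerical sketch of this discounted expectation (assuming, for illustration only, a lognormal interoccurrence-time density conditioned on survival to t₀; the median 56.5 yr, σ_lnS = 0.39 and t₀ = 78 yr are values quoted later in the paper for the Guerrero source):

```python
import math

def lognormal_pdf(t: float, m: float, sigma_ln: float) -> float:
    """Lognormal density with median m and log-standard-deviation sigma_ln."""
    if t <= 0.0:
        return 0.0
    z = math.log(t / m) / sigma_ln
    return math.exp(-0.5 * z * z) / (t * sigma_ln * math.sqrt(2.0 * math.pi))

def expected_pv_loss(L: float, r: float, t0: float, m: float,
                     sigma_ln: float, t_max: float = 1000.0,
                     steps: int = 50000) -> float:
    """D = L * E[e^{-r(T - t0)} | T > t0], evaluated by the midpoint rule."""
    dt = (t_max - t0) / steps
    num = surv = 0.0
    for i in range(steps):
        t = t0 + (i + 0.5) * dt
        f = lognormal_pdf(t, m, sigma_ln)
        num += f * math.exp(-r * (t - t0)) * dt
        surv += f * dt  # normalizing probability P(T > t0)
    return L * num / surv

print(expected_pv_loss(1.0, 0.05, 78.0, 56.5, 0.39))
```

Setting r = 0 recovers D = L, a useful sanity check on the normalization.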
d(C + D)/dx = 0    (7)

Let this value of x be x₀. The optimal utility is some constant minus C(x₀) + D(x₀).
Consider a structure to be built at time t₀′, designed optimally using a simplified distribution whose parameters have been adjusted to give the same optimum design at some time t₀ as provided by our present state of knowledge. If the earthquake generating process is Poisson and the probability distribution of the characteristic-earthquake magnitude is time independent, or if t₀′ = t₀, then there is no utility loss for any distribution that we assign to the time between characteristic earthquakes, nor if we use the compound model that synthesizes our present state of knowledge.
The second problem concerns a building code that is to remain in effect for ten years, or less if a major earthquake occurs earlier. We take the construction rate of each structural type as constant during this interval. We shall express utilities as their present value at the time t_c that the code is approved, shift the time origin by an amount t₀′ and translate to present values at t₀′. (This entails negligible errors in view of the small probability that the next major earthquake occurs before the ten years are up.) In solving this second problem we use only the compound model associated with our present state of knowledge.
We will now assume that losses not exceeding 2.5×10⁻³ C₀ are negligible when due to using a simplified probability distribution of the characteristic-earthquake interoccurrence times.
Fig 1 Probability density function of major-earthquake interoccurrence times
Fig 2 Hazard function for lognormal distribution with σ_lnS = 0.39
Earthquake statistics
For our site there is one dominant source of earthquakes. Its characteristic
events occur at random time intervals, which we shall call S, having a bimodal
probability density function (Jara and Rosenblueth, 1988) (see fig 1). The density can be regarded as the weighted sum of two unimodal ones, the first nearly uniform (Hong, 1988), the second a lognormal distribution. The first distribution is of interest for emergency decisions; the second one is more relevant in structural design; we will confine our attention to it. Its hazard function is as in fig 2.
The slow decrease in f_T(t) for large t/ES (ES = expected S) may be counterintuitive and make the lognormal distribution suspect (Suzuki and Kiremidjian, 1988). However, our intuition does not go far in problems of this nature; the hazard function should perhaps descend for large t/ES, and intuitively it should for extremely long t. Also, in the applications that now interest us there is no case in which t/ES approaches 2.5, so the question is purely academic.
This distribution has two parameters. We choose these to be the median m_S and the standard deviation σ_lnS.
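The hazard function plotted in fig 2 is h(t) = f(t)/(1 − F(t)). A short sketch (time measured in units of the median, σ_lnS = 0.39 as in the figure caption) shows the steep rise below the median and the subsequent flattening:

```python
import math

def lognormal_hazard(t: float, m: float, sigma_ln: float) -> float:
    """Hazard h(t) = f(t) / (1 - F(t)) for a lognormal with median m."""
    if t <= 0.0:
        return 0.0
    z = math.log(t / m) / sigma_ln
    pdf = math.exp(-0.5 * z * z) / (t * sigma_ln * math.sqrt(2.0 * math.pi))
    survival = 0.5 * math.erfc(z / math.sqrt(2.0))  # 1 - Phi(z)
    return pdf / survival

# Hazard at a few multiples of the median (median taken as 1):
for t in (0.5, 1.0, 1.5, 2.0):
    print(t, lognormal_hazard(t, 1.0, 0.39))
```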
We are uncertain about major-event magnitudes, their relation to the peak acceleration on hard ground, and the relation between this acceleration and what we may call effective spectral acceleration. To study this matter we shall resort to a model that incorporates an estimate of the uncertainty in the relation between H and Y and in that between Y and the actual structural strength.
Based on a regression analysis of 16 subduction earthquakes, Singh et al. (1987a) have proposed the relation

(10)

where s_{i+1} = t_{i+1} − t_i in years (fig 3). This expression is consistent with results obtained from Hong's model through simulations. We thus deal with a "slip predictable" model (Shimazaki and Nakata, 1980; Kiremidjian and Anagnos, 1984). Empirically it was concluded that H could be assigned a Gaussian distribution with σ_{H|S} = 0.27.
For a lower limit to σ_lnZ we shall only take into account the uncertainties in H and in the relation between H and A₀: σ_lnZ > [(0.27/2.30)² + 0.37²]^(1/2) = 0.4. Incorporating uncertainties in the relation between A₀ and Y and in X we find that σ_lnZ must also exceed (0.4² + 0.82²)^(1/2) = 0.9, for individual variances were obtained exclusively from samples. As a reasonable upper limit we shall adopt 1.3.
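These bounds are simple root-sum-square combinations of the individual log-uncertainties (the reading 0.82 for the second contribution is uncertain in the source):

```python
import math

# Lower bound: magnitude uncertainty (0.27, converted to base e via 2.30)
# combined with the H-to-A0 relation uncertainty (0.37); about 0.4.
lower = math.sqrt((0.27 / 2.30) ** 2 + 0.37 ** 2)

# Second bound: the 0.4 above combined with the A0-to-Y and X
# contribution, read here as 0.82; about 0.9.
second = math.sqrt(0.4 ** 2 + 0.82 ** 2)

print(lower, second)
```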
Bayes' theorem was applied twice to determine the probability distribution of
parameters in the lognormal distribution of interoccurrence times for each of sev-
eral Mexican subduction segments. The first step began with a rather diffuse
prior of the expectation and standard deviation of In S. This was based on world-
wide data on subduction earthquakes. Data in the first update were those for the
Mexican Pacific Coast. Data for a particular segment served to update the second
time.
We shall concentrate on the Michoacan segment, where the 1985 shocks originated, and on the Guerrero gap, west of Acapulco, which has been quiescent since November 1911.
In addition to the major earthquakes generated at these segments, we shall
recognize shocks of smaller intensity in Mexico City to optimize design base shear
coefficients. These shocks come from many essentially independent sources. We
will treat them as though generated by a Poisson process. For each source we may write the magnitude exceedance rate as proposed by Cornell and Vanmarcke (1969):

λ(M) = λ(M₁) (e^(−βM) − e^(−βM_u)) / (e^(−βM₁) − e^(−βM_u))    (11)
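Eq. (11) is the standard truncated-exponential exceedance rate; a sketch with hypothetical parameter values (threshold M₁ = 4.5, maximum magnitude M_u = 8.2, β = 2.0, one event per year above M₁):

```python
import math

def exceedance_rate(M: float, M1: float, Mu: float,
                    beta: float, lam_M1: float) -> float:
    """Truncated-exponential magnitude exceedance rate, Eq. (11)."""
    num = math.exp(-beta * M) - math.exp(-beta * Mu)
    den = math.exp(-beta * M1) - math.exp(-beta * Mu)
    return lam_M1 * num / den

# Rate of events exceeding M = 6.0 for the hypothetical source above.
print(exceedance_rate(6.0, 4.5, 8.2, 2.0, 1.0))
```

By construction λ(M₁) returns the full rate and λ(M_u) returns zero.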
λ(x)    (13)

λ(x)    (14)

D = λ(x) L / (r + λ(x))
Results
Consider the present (1989) situation on Mexico City soft clay. Design will be governed by the threat of the next Guerrero gap earthquake. We find ES = 56.5 yr. The optimum x for structures built about five years from now (some 83 yr after the last major event) is 0.212 when σ_lnZ = 1.3.
quakes dominate design. However, σ_lnZ = 0.4 does not appear in the table since x₀ = 0.05 and there is no need to design against earthquakes in this case.
[Figure: Loss/C₀ versus time, about 78 to 88 yr after the last major event; one curve is labeled "Exponential distribution".]
Source      σ_lnZ   x₀       Distribution         Loss
Guerrero    0.4     0.066    Lognormal f(s)+      4.0×10⁻⁸
                    0.059    Exponential f(s)+    1.0×10⁻⁷
                    0.065    8.15                 3.4×10⁻⁴
                    0.018*                        1.5×10⁻³
            1.3     0.212    Lognormal f(s)+      4.0×10⁻⁶
                    0.061    Exponential f(s)+    4.5×10⁻⁶
                    0.060    8.15                 1.4×10⁻⁴
                    0.018*                        5.1×10⁻²
Michoacan   1.3     0.085    Lognormal f(s)+      1.5×10⁻³
                    0.011    Exponential f(s)+    1.6×10⁻³
                    0.012    7.50                 2.1×10⁻³
                    0.020*                        1.3×10⁻²
built during this period will be designed for the values of x₀ we found in the foregoing paragraphs, associated with t₀ ≈ 5 yr. Total expected losses over the ten-year period are given in table 2 in terms of ~C₀, the yearly value that would be built of the structures of interest if they were not earthquake resistant. Losses were computed using lognormal distributions with uncertain parameters for the interoccurrence times. The structures we analyzed are representative of all structures to be built during one year in the city and requiring seismic design. In gauging the cost of keeping design coefficients constant, ~C₀ must therefore comprise all such structures, so it amounts to several thousand times the cost of a single structure.
Table 2 Code coefficients in effect for ten years
If design coefficients are optimal near the midpoint of the time span during
which the code is expected to be in effect, the total expected losses will vary
very nearly as the square of that time span. Compared with the results in table 2,
the cost of producing sets of design coefficients at intervals much shorter than
ten years is insignificant. The important utility loss would come from resistance
on the part of the profession, mostly due to a loss of credibility. Paradoxically, it seems reasonable to change the code coefficients every ten years or so under the conditions we live in at present, under the threat of the Guerrero macroseism, while if the main danger were from the remote Michoacan event the code ought perhaps to change every four or five years. The alternative of having the building code specify time-dependent design coefficients does not seem realistic.
In all the foregoing computations the expected present value of losses due to
the Poisson-process background noise amounts to about 5% of the total. Since re-
sults could be affected by this ratio, computations were repeated with a ten-fold
increase in the Poisson-process losses. We arrived at the same conclusions.
The sensitivity of our results to the time of construction and to that of code
implementation will be more pronounced when and if times of earthquake occurrence
can be better foretold.
Concluding remarks
We have examined the choice of design coefficients when earthquakes that govern de-
sign are not generated by a Poisson process. Earthquakes that reach a given site
are idealized as consisting of two groups. The first comprises moderate-intensity
events. They include moderate-magnitude nearby seisms as well as characteristic
earthquakes from several distant sources. They are idealized as produced by a
Poisson process. The second group consists of intense characteristic quakes
generated at one source and having a time dependent occurrence rate.
For illustration we concentrate on structures on Mexico City soft clay, sub-
jected to background noise plus either the characteristic events from the Guerrero
gap and those from the Michoacan segment, or only to the latter. Times elapsed (in 1989) since the last macroseisms are 78 and 4 yr, respectively. The corresponding recurrence periods are 56.5 and 50.0 yr. For each source we have taken a lower and
an upper limit of the overall standard deviation that includes uncertainties in the
relations between magnitude and peak ground acceleration on hard ground and between
the latter and the actual base shear coefficient as well as in structural
strength.
We consider two kinds of problem. The first concerns structures designed
optimally, not according to a code. We compare results of using either a fixed-
parameter lognormal or an exponential distribution of interarrival times of the
high intensity earthquakes, rather than a lognormal distribution with uncertain
parameters, which represents our present state of knowledge. Parameters of the
simplified probability distributions are adjusted so we get the correct design
base shear coefficients for structures to be built about five years hence. With
both simplified probability distributions we find that utility losses are insignif-
icant for structures to be built at any time within the next ten years. However,
using the mean occurrence rates, losses are excessive, even in the case of maximum
uncertainty.
The second kind of problem concerns a building code that will remain in effect
over a given number of years unless a major earthquake should strike earlier, and
then the code would be changed. We find out the implications of keeping the design
coefficients constant over this period. Using the same earthquake statistics and
parameters as for the first kind of problem we find the expected losses relative to
a code that would vary the design coefficients continuously and optimally as func-
tions of time of construction. These losses vary approximately as the square of
the time during which the code is expected to remain in force. Results should be compared with the cost of implementing this sort of code and with that of changing it at more frequent intervals or adopting time-dependent design coefficients.
Ignoring the time elapsed since the last major earthquake is unacceptable in
any case, even with the largest uncertainty assigned to the relation between magni-
tude and structural response.
When and if we have better methods for foretelling occurrence times,
explicit consideration of the time dependence of design coefficients will acquire
paramount importance.
Acknowledgements
This paper is essentially based on Jara and Rosenblueth (1989). It was partially
sponsored by Mexico's Federal District Department and the National Council for
Science and Technology (CONACyT).
We are grateful to Mario Ordaz for his valuable contributions and constructive
criticism of the original manuscript.
References
Cornell, C A and Vanmarcke, E (1969), "The major influences on seismic risk", Proc IV World Conference on Earthquake Engineering, Santiago de Chile, Chile
Esteva, L and Ruiz, S E (1989), "Seismic failure rate of multistory frames", Journal of Structural Engineering ASCE, Vol 115, 268-284
Hong, H P (1988), "Modelo de generacion de temblores de subduccion" [A model for the generation of subduction earthquakes], Doctoral thesis, Facultad de Ingenieria, UNAM, Mexico
Rosenblueth, E and Ordaz, M (1987), "Use of seismic data from similar regions", Earthq Engnrng Struct Dyn, Vol 15, 619-34
Shimazaki, K and Nakata, T (1980), "Time predictable recurrence for large earthquakes", Geophys Res Lett, Vol 7, 279-82
1. INTRODUCTION
In this paper the problem of estimating the accumulated permanent displacements of an offshore platform during a single storm is considered. For dynamically sensitive structural systems subjected to wave loads this problem is generally very difficult. However, for dynamically insensitive systems some methods and experience related to permanent deformations are described in Grinda et al. [1] and Papadrakakis & Loukakis [2]. For general dynamic systems modelled as one-degree-of-freedom (and two-degree-of-freedom) systems a number of methods exist, see e.g. Nielsen et al. [3] and Toro & Cornell [4]. However, for multi-degree-of-freedom systems very little work of practical relevance has been done.
For steel jacket platforms subjected to wave, wind and current loads with specified main directions
three methods to estimate the permanent displacements during a single storm are proposed, namely
• a simulation approach
• a differential equation approach
• a superposition approach - the simple method
These three approaches are described in Sørensen & Thoft-Christensen [5], Sørensen et al. [6] and Sørensen & Thoft-Christensen [7]. It is assumed that the structural system can be modelled by a
multi-linear elastic-plastic system and that the loading can be modelled by a stationary Gaussian
Markov process.
In the simulation approach realisations of the load are generated and the permanent displacements
are determined by elastic-plastic analysis of the structural system, see section 2. In the differential
equation approach a system of differential equations is formulated from which the expected value and the standard deviation of the response (e.g. permanent displacements) can be determined as functions of time. Numerical techniques and approximations to solve the system of equations
are discussed in section 3.
These two approaches require a very large number of computer calculations. Therefore a rather
simple method is proposed. The basic idea in the superposition approach is to estimate the
accumulated permanent displacements as sums of permanent displacements from single waves
(this assumption is equivalent to that used in Miner's rule for fatigue analysis). It is described
how a single storm can be broken down into a number of single waves and how the permanent
displacements for each single wave can be determined. Further it is described how the reliability
of the structural system can be estimated.
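The Miner-type accumulation behind the simple method can be sketched in a few lines. The wave-height bins, wave counts and per-wave displacements below are invented for illustration and are not values from the paper:

```python
# Sketch of the superposition ("simple") method: the accumulated permanent
# displacement in a storm is taken as the sum, over wave-height bins, of the
# number of waves in each bin times the permanent displacement caused by one
# wave of that height (analogous to Miner's rule for fatigue).

def accumulated_displacement(bins):
    """bins: list of (n_waves, disp_per_wave) tuples, one per wave-height interval."""
    return sum(n * d for n, d in bins)

# Hypothetical storm with three wave-height intervals:
bins = [(900, 0.0),    # small waves: no plastic response
        (80, 0.002),   # medium waves: 2 mm permanent sway per wave
        (5, 0.015)]    # large waves: 15 mm permanent sway per wave
print(accumulated_displacement(bins))  # 0.235 (metres)
```

Note that this linear summation ignores the order in which the waves occur; the sequence-effect extensions discussed later address exactly that limitation.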
The three approaches are compared on a qualitative level. Numerical tests are currently being
performed using simple models of offshore platforms. The results of this testing of the simple
method will be published later.
2. SIMULATION APPROACH
First the basic structural model is presented and next the modelling of the stochastic external
loading (wind, wave and current) is discussed.
The following basic assumptions are made:
• the structural system is modelled with straight two- or three-dimensional truss or beam elements, each with two nodes i and j
• the material is linear elastic - perfectly plastic
• the external loads are applied to the nodes
• second-order effects are neglected.
For a single element the incremental stiffness equation in local coordinates is

dR_e = k_e (dU_e - dU_e^p) (1)

where dR_e is the increment in the element nodal forces R_e (axial force, moments, etc.), k_e is the local elastic stiffness matrix and dU_e and dU_e^p are the increments in the total and plastic element displacements. The element is in a plastic state when the yield condition

F_i(R_e) = 1 (2)

is satisfied; F_i < 1 indicates an elastic state and F_i > 1 is not possible. If the flow rule (normality principle) is accepted then the increments in the plastic displacements dU^p can be determined, see Bathe [8].
The time-dependent load on the jacket structure consists of wind, wave and current loading and is modelled by a stochastic vector process {P(t), t ∈ [0, T]} where T is the duration of a storm and P_i(t), i = 1, ..., N models the load in the ith degree of freedom. N is the total number of global degrees of freedom in the finite element modelling of the structural system. The stochastic processes {P_i(t)}, i = 1, ..., N are assumed to be filtered Gaussian processes and are written

P(t) = P_0 + C Θ(t) (3)

b_0 Θ + b_1 Θ^(1) + ... + b_n Θ^(n) = W(t) (4)

where P_0 is the mean value vector, {W(t)} is a Gaussian white noise process and the matrices b_0, ..., b_n and C define the filter.
The elements in the b and C matrices are determined so that the cross-spectral densities of the Gaussian process {P(t)} defined by (3) - (4) match the actual cross-spectral densities of the load process as closely as possible. If the load process mainly models wave forces determined by Morison's equation then a first-order filter can be expected to give reasonable results.
The permanent deformations during a single storm can be simulated by generating realisations of {Θ(t)} and using a non-linear finite element program.
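For a first-order filter the filtered process reduces to a scalar Gauss-Markov (Ornstein-Uhlenbeck) process, and a realisation can be generated by Euler-Maruyama time stepping. The following is a minimal sketch of this simulation step; the filter coefficients, noise intensity and time step are illustrative assumptions:

```python
import math, random

# Sketch: realisation of a first-order filtered Gaussian (Gauss-Markov)
# process, b0*theta + b1*theta' = W(t), simulated by Euler-Maruyama.
# Coefficients and time step are illustrative, not taken from the paper.

def simulate_ou(b0=0.5, b1=1.0, sigma_w=1.0, dt=0.05, n_steps=2000, seed=1):
    rng = random.Random(seed)
    theta = 0.0
    path = [theta]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))               # Wiener increment
        theta += (-b0 / b1) * theta * dt + (sigma_w / b1) * dw
        path.append(theta)
    return path

path = simulate_ou()
# The stationary variance of this process is sigma_w**2 / (2 * b0 * b1).
```

In the full method each realisation of the load would then drive a non-linear finite element analysis from which the permanent displacements are read off.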
3. DIFFERENTIAL EQUATION APPROACH
In the differential equation approach the structural system and the load process are described by a state vector {X(t)} which is assumed to satisfy the Ito stochastic differential equation

dX = A(X, t) dt + D(t) dW(t)

where X is the state vector of dimension N_x, A(X, t) is the drift vector, D(t) of dimension (N_x × M) is the diffusion matrix and {W(t)} is an M-dimensional Wiener process.
Based on the modelling in section 2 the state vector is

X = (U, U', U^p, R_e, Q) (8)

where U is the vector of global generalized displacement degrees of freedom (dimension N), U' is the vector of global generalized velocities (dimension N), U^p is the vector of permanent deformations of all element d.o.f. assembled in one vector (dimension NE × NDOF, where NE is the number of elements and NDOF is the number of degrees of freedom in each element), R_e is the vector of element nodal forces (in the local element coordinate system) assembled in one vector (dimension NE × NDOF) and Q is an auxiliary vector (dimension M).
The dynamic behaviour of the structural system is assumed to be described by

M U'' + C U' + R(U, U^p) = P(t) (9)

where M is the mass matrix (lumped or consistent) (dimension (N × N)), C is the damping matrix (dimension (N × N)) and R is the non-linear restoring force vector (dimension N). If the structural system is dynamically insensitive then the system is reduced by deleting the inertia and damping terms.
The drift vector A(X, t) can now be written

A(X, t) = [ U', M^{-1}(P(t) - C U' - R(U, U^p)), H U', K^p U', Q', ..., Q^(n-1), -b_n^{-1}(b_0 Q + b_1 Q' + ... + b_{n-1} Q^(n-1)) ]^T (10)

where
R_i is the local restoring force vector in element i,
T_i is the transformation matrix for element i from the local to the global coordinate system,
H is a matrix obtained by assembling the element H_i matrices,
K^p is the global modified stiffness matrix given by

(11)

Since R_i, H and K^p are non-linear functions of X the drift vector is non-linear. The time-independent diffusion matrix D is modelled as

(12)
N_x = 2N + 2 NE · NDOF + n M (13)
Even for small structural systems the number of equations can be very large, typically larger than
2000.
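Equation (13) makes the growth of the state dimension easy to check. The discretisation numbers used below are assumed for illustration only:

```python
# State-space dimension from (13): N_x = 2N + 2*NE*NDOF + n*M, for an assumed
# small jacket model: N global d.o.f., NE elements with NDOF local d.o.f. each,
# and a filter of order n driving M load components.

def state_dimension(N, NE, NDOF, n, M):
    return 2 * N + 2 * NE * NDOF + n * M

# e.g. 60 global d.o.f., 80 beam elements with 12 d.o.f. each,
# a second-order filter (n = 2) on M = 60 load components:
print(state_dimension(N=60, NE=80, NDOF=12, n=2, M=60))  # 2160
```

Even this modest discretisation exceeds the 2000 equations quoted above, and the covariance equations (15) grow quadratically in N_x on top of that.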
The unknown joint density function f_X(x, t) of the state vector can in principle be determined by solving the associated Fokker-Planck-Kolmogorov equation. However, due to the non-linear drift vector and the large number of equations this is impossible in practice, at least with present computer resources.
Instead, one can try to estimate the statistical joint moments of the state variables from a closed set of differential equations (see Nielsen et al. [3]). It follows from the state differential equation that the expected values and the covariances can be determined from

μ_i' = E[A_i] , i = 1, ..., N_x (14)

κ_ij' = E[A_i (X_j - E[X_j])] + E[A_j (X_i - E[X_i])] + (D D^T)_ij , i, j = 1, ..., N_x (15)

where
μ_i is the ith element of E[X],
E[·] is the expectation operation,
κ_ij is the i, jth element in the covariance matrix E[(X - E[X])(X - E[X])^T].
The initial values of μ and κ are assumed to be given:

μ(t = 0) = μ_0 (16)

κ(t = 0) = κ_0 (17)
If the joint density function of X is described completely by the second moment characteristics
then the system of equations (14) - (15) is closed and can be solved numerically.
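For a linear oscillator driven by white noise the moment equations analogous to (14) - (15) close exactly, which makes a compact test case for the numerical integration. The oscillator parameters and time step below are illustrative assumptions:

```python
# Sketch: covariance equations for a linear SDOF oscillator under white noise,
# x'' + 2*z*w*x' + w**2*x = W(t) with noise intensity D. For a linear system
# the moment equations (cf. (14)-(15)) close exactly; here they are integrated
# by forward Euler. Parameters are illustrative.

def stationary_variances(w=2.0, z=0.05, D=1.0, dt=0.001, t_end=200.0):
    # k11 = Var[x], k12 = Cov[x, x'], k22 = Var[x']
    k11 = k12 = k22 = 0.0
    for _ in range(int(t_end / dt)):
        dk11 = 2.0 * k12
        dk12 = k22 - w * w * k11 - 2.0 * z * w * k12
        dk22 = -2.0 * w * w * k12 - 4.0 * z * w * k22 + D
        k11 += dk11 * dt
        k12 += dk12 * dt
        k22 += dk22 * dt
    return k11, k22

k11, k22 = stationary_variances()
# Exact stationary values: Var[x] = D/(4*z*w**3), Var[x'] = D/(4*z*w).
```

With w = 2, z = 0.05 and D = 1 the integration converges to Var[x] = 0.625 and Var[x'] = 2.5, matching the exact stationary solution; for the non-linear jacket model the expectations on the right-hand side must instead be evaluated with a closure assumption.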
As described above the structural elements are assumed to be elastic-plastic. This implies that the nodal forces R_e will be bounded. Therefore, they will not be Gaussian distributed and the Gaussian closure method can thus only be expected to give approximate results. The estimates of the statistical joint moments can be improved by using other distribution functions. If these are not completely described by the second-moment characteristics, it is necessary to enlarge the system of moment differential equations (14) - (15). This possibility is discussed in [3], but for practical
The total number of differential equations corresponding to (14) - (15) will generally be very large
but can be significantly reduced if some of the elements in the state vector are assumed to be
uncorrelated. Other simplifications can also be discussed. The main output from the solution of
(14) - (15) is the time-dependent expected values and covariances (variances) of the permanent
deformations of some critical points.
One solution possibility is to approximate the behaviour of the dynamic system using only a few
eigenmodes. Approximation of the response by the eigenmodes is well known in linear elastic
dynamic analysis. For elastic-plastic structures this approach has only been investigated in a few papers. In Baber [9] modal analysis is used to analyse the response of hysteretic frames. It is concluded in [9] that the computational effort decreases significantly, but that it is necessary to include several modes in addition to the dominant modes corresponding to a linear analysis. The main reason for this is that the system non-linearities cause interaction between the modal responses. Further, Baber [9] concludes that eliminating too many modes may cause the iterations at each time step to diverge.
Another possibility to reduce the number of equations in (14) - (15) is to identify the structural elements which can be expected to remain elastic (or which can be approximated by elastic elements). The elements in U^p (and in X) corresponding to the elastic elements can then be deleted. Also the corresponding elements in K^p can be deleted because the restoring forces in the elastic elements
can be determined directly from the nodal displacements. These approximations are generally
non-conservative because the permanent deformations in the elements are underestimated.
A third possibility to reduce the number of unknowns in (14) - (15) is to neglect the time variability
of some of the expectations or covariances, e.g. by assuming some of the variables fully correlated or
independent. The expectations and covariances (variances) of some of the permanent displacements
are the most interesting quantities. Therefore, it should be possible to neglect the time-dependence of some of the other quantities.
4. SUPERPOSITION APPROACH - THE SIMPLE METHOD
In the simple method the failure event corresponding to excessive accumulated permanent deformation is described by the safety margin

M = D_c - Z_D Σ_{i=1}^{N_H} N_i D_i (18)

where
D_c is the critical value of the accumulated permanent deformations,
Z_D is a model uncertainty (stochastic) variable,
N_i is the number of waves with wave heights in the interval [H_{i-1}, H_i[ and D_i is the permanent displacement caused by a single wave in that interval.
During one storm the sea surface elevation is assumed to be modelled by a stationary Gaussian stochastic process {η(t)} with zero mean and spectral density function S_η(ω). S_η(ω) is assumed to be defined by the significant wave height H_s and the zero-crossing period T_z.
If T is the length of the storm then the expected total number of waves is T/T_z. To determine N_i, i = 1, ..., N_H (the number of waves with wave heights in the interval [H_{i-1}, H_i[) the density function f_H(h) of the wave heights H is needed, see figure 1.
(Figure 1: realisation of the sea surface elevation η(t) and definition of the wave heights.)
The joint density function f_PV(p, v, τ) of a peak p and the following trough v separated by the time lag τ can be expressed (equation (19)) in terms of f_{η1 η2 η1' η2'}, the joint density function of η1, η2, η1' and η2'. Here η1 = η(t1) and η2 = η(t2) = η(t1 + τ), and the derivatives η1' and η2' are assumed to exist.
In order to estimate the density function of the range between two successive extremes it is also necessary to estimate the density function f_T1(t) of the time T1 between successive extremes. This first-passage problem cannot in general be solved analytically. A simple estimate of f_T1(t) can be obtained by using the upper bound (the crossing rate)

(20)

The approximation (20), which can be calculated numerically, is used in the interval 0 ≤ t ≤ T_0, where T_0 is determined from a normalization condition; f_T1(t) = 0 when t > T_0.
Using (19) and (20) the density function of the wave heights can now be determined

(21)

The expected number of waves with wave heights in the interval [H_{i-1}, H_i[ is then

N_i = (T / T_z) ∫_{H_{i-1}}^{H_i} f_H(h) dh (22)
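The binning in (22) is easy to sketch numerically. As an assumption for illustration (the paper derives f_H from the peak/trough statistics in (19) - (21)), a narrow-band Rayleigh wave-height distribution is used below, and the storm parameters are invented:

```python
import math

# Sketch of (22): expected number of waves per wave-height interval,
# N_i = (T/T_z) * integral of f_H over [H_{i-1}, H_i[. A narrow-band Rayleigh
# height distribution F_H(h) = 1 - exp(-2*(h/Hs)**2) is assumed here.

def wave_counts(T, Tz, Hs, edges):
    F = lambda h: 1.0 - math.exp(-2.0 * (h / Hs) ** 2)
    n_total = T / Tz
    return [n_total * (F(edges[i + 1]) - F(edges[i])) for i in range(len(edges) - 1)]

# Hypothetical 6-hour storm, Tz = 9 s, Hs = 10 m, 1 m bins up to 20 m:
counts = wave_counts(T=6 * 3600.0, Tz=9.0, Hs=10.0, edges=[float(h) for h in range(21)])
print(round(sum(counts)))  # 2399, close to the expected total T/Tz = 2400
```

The characteristic wave of each bin (its mid-height) is the single wave fed to the finite element analysis in the next step.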
Consider the case where all quantities except the sea surface and the model uncertainty variable Z_D are deterministic. The permanent displacement from a single wave (at the start of the storm) with wave height ½(H_{i-1} + H_i) can be determined using a non-linear finite element program and
a wave loading program, e.g. the RASOS program developed during the BRITE P1270 project, see Gierlinski [11]. The probability of failure can then be estimated.
Let the stochastic variables (for example yield stresses, model uncertainties, load parameters in Morison's equation and quantities in the member model) be denoted Y = (Y_1, ..., Y_n). Then D_i will be a function of Y. If the stochastic variables Y also include quantities defining the sea surface process (for example the significant wave height) then N_i will also depend on Y.
With only one failure element in the structural system the probability of failure can be determined
from
P_f = P(M ≤ 0)
    = P(D_c - Z_D Σ_{i=1}^{N_H} N_i(Y) D_i(Y) ≤ 0)
    = ∫ P(D_c - Z_D Σ_{i=1}^{N_H} N_i(y) D_i(y) ≤ 0 | Y = y) f_Y(y) dy (23)
The conditional probability of failure in (23) can be determined as described above. Numerical determination of the multi-dimensional integral in (23) is generally very computer time consuming, but the computational effort can be reduced significantly by using the fast probability integration technique based on FORM/SORM, see Wen & Chen [12]. A systems reliability index
can be determined if the failure modes are modelled as elements in a series system.
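A crude Monte Carlo evaluation of (23) illustrates its structure. The critical displacement, the lognormal distributions and the cubic response model below are invented for illustration; the paper instead advocates FORM/SORM-based fast probability integration:

```python
import math, random

# Crude Monte Carlo sketch of (23): P_f = P(D_c - Z_D * sum_i N_i(Y)*D_i(Y) <= 0).
# All numbers and distributions are illustrative assumptions.

def failure_probability(n_samples=200_000, seed=42):
    rng = random.Random(seed)
    d_crit = 0.30                                   # critical displacement [m], assumed
    failures = 0
    for _ in range(n_samples):
        z_d = rng.lognormvariate(0.0, 0.2)          # model uncertainty Z_D
        hs = rng.lognormvariate(math.log(9.0), 0.15)  # significant wave height [m]
        accumulated = 1.5e-4 * hs ** 3              # assumed storm response model
        if d_crit - z_d * accumulated <= 0.0:
            failures += 1
    return failures / n_samples

pf = failure_probability()   # of the order of a few per cent for these numbers
```

Sampling converges slowly for small probabilities, which is why the fast probability integration of [12] is preferred when several stochastic variables enter through Y.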
(Figure 2: two possible groupings of the waves in a single storm, η(t) versus time t.)
In the above model no influence of the sequence effects is taken into account. In the following some
simple methods to take these effects into account are discussed. In figure 2 two possible groupings
of the waves are shown. The idea is to group all waves with wave heights in one interval, e.g. [H_{i-1}, H_i[, into one group containing N_i waves. One "extreme" situation is first to consider
the smallest waves and next the increasing wave heights. Another "extreme" is first to consider
the largest waves and next the decreasing wave heights. For each of the two extreme situations
the permanent displacements are still estimated on the basis of single waves, but the accumulated
displacements at the end of one group of waves are used as input to the structural analysis of
the characteristic wave of the next block. These models will not increase the computational work
significantly.
When the largest waves are considered first an extension of the above method is to perform a
complete structural analysis corresponding to the whole group of the largest waves. The second
largest waves may also be analysed in the same way. The remaining waves are treated as above.
This method will increase the computational work significantly. However, it can be expected to
give a much better estimate of the accumulated permanent deformations.
The simple method is at present being tested at the University of Aalborg.
5. ACKNOWLEDGEMENTS
This paper represents part of the results of BRITE Project P1270, "Reliability Methods for Design and Operation of Offshore Structures". This project is funded through a 50% contribution from the Directorate General for Science, Research and Development of the Commission of the European Communities and another 50% contribution from the partners and industrial sponsors. The partners in the project are: TNO (NL), d'Appolonia (I), RINA (I), Snamprogetti (I), The University of Aalborg (DK), Imperial College (UK), Atkins ES (UK), Elf (F), Bureau Veritas (F), CTICM (F) and IFREMER (F). The industrial sponsors in the project are: SHELL Research (NL), SHELL UK (UK), Maersk Oil and Gas (DK), NUC (DK) and Rambøll & Hannemann (DK).
6. CONCLUSION
Methods to estimate the permanent deformations during one storm are described, namely simulation, a differential equation approach and the so-called simple method. In the differential equation approach the second-order moments of the time-dependent behaviour of the permanent deformations are estimated. The basic idea in the simple method is to consider single waves and to
accumulate linearly the permanent displacements from these. It is described how the magnitude
and number of single waves in one storm can be determined. The permanent displacements from
one single wave are assumed to be determined using a general non-linear finite element program.
Some extensions/improvements of the simple method taking into account the sequence effects are
discussed. One idea is to consider the groups of basic waves sequentially and to use the accumulated
permanent displacements after one group as starting values for the analysis of the basic wave which
represents the next group of waves.
The main drawbacks and advantages of the simulation approach are:
Drawbacks: • very computer time consuming and • expensive to include other stochastic variables
than those modelling the load process.
Advantages: • load process can be modelled rather precisely and • elastic plastic structural systems
can be modelled accurately.
The main drawbacks and advantages of the differential equation approach are:
Drawbacks: • large number of differential equations, • very computer time consuming, • expensive
to include other stochastic variables than those modelling the load process and • brittle structural
elements cannot be modelled.
Advantages: • load process can be modelled rather precisely, • elastic plastic structural systems
can be modelled accurately, • the differential equations which model the permanent deformations
are exact (with respect to the assumptions) and describe the time-dependent behaviour exactly
and • dynamic effects can be included.
The main drawbacks and advantages of the simple method are:
Drawbacks: • the estimates of the permanent displacements are generally rather inexact.
Advantages: • not very computer time consuming, • it is possible to include other stochastic vari-
ables than those modelling the load process using a FORM/SORM approach, • brittle structural
elements can be modelled and • elastic plastic structural systems can be modelled accurately.
Compared with the simple method to estimate the permanent displacements, the differential equation approach has the advantage that sequence effects can be taken into account. Compared with
simulation the differential equation approach has the advantage that it is possible to incorporate it
in a FORM/SORM analysis. The main advantage of the simple method compared with the other
methods is that it is not very computer time consuming, i.e. it is practically applicable. However, testing of the accuracy of the simple method has not yet been completed.
7. REFERENCES
[1] Grinda, K.G., W.C. Clawson & C.D. Shinners: Large-Scale Ultimate Strength Testing of Tubular K-braced Frames. OTC paper 5832, 1988, pp. 227-236.
[2] Papadrakakis, M. & K. Loukakis: Inelastic Cyclic Response of Restrained Imperfect Columns. ASCE, Journal of Engineering Mechanics, Vol. 114, No. 2, 1988, pp. 295-313.
[3] Nielsen, S. R. K., K. J. Mørk & P. Thoft-Christensen: Stochastic Response of Hysteretic
Systems. Structural Reliability Theory, Paper No. 39, The University of Aalborg, 1988. To
be published in Structural Safety.
[4] Toro, G.R. & C.A. Cornell: Extremes of Gaussian Processes with Bimodal Spectra. ASCE,
Journal of Engineering Mechanics, Vol. 112, No.5, 1986, pp. 465-484.
[5] Sørensen, J.D. & P. Thoft-Christensen: Estimation of Permanent Displacements by Simulation - Part II. Report B(III.2)3, BRITE Project P1270, University of Aalborg, 1988.
[6] Sørensen, J.D., P. Thoft-Christensen & S.R.K. Nielsen: Estimation of Permanent Displacements by Differential Equation Approach. Report B(III.2)4, BRITE Project P1270, University of Aalborg, 1988.
[7] Sørensen, J.D. & P. Thoft-Christensen: Estimation of Permanent Displacements by the Simple Method. Report B(III.2)6, BRITE Project P1270, University of Aalborg, 1989.
[8] Bathe, K.-J. : Finite Element Procedures in Engineering Analysis. Prentice-Hall, 1982.
[9] Baber, T. T.: Modal Analysis for Random Vibration of Hysteretic Frames. Earthquake
Engineering and Structural Dynamics, Vol. 14, 1986, pp. 841-859.
[10] Madsen, H. 0., S. Krenk & N. C. Lind: Methods of Structural Safety. Prentice-Hall, 1986.
[11] Gierlinski, J.T. : RASOS : Reliability Analysis System for Offshore Structures. BRITE
Project P1270, Atkins ES, 1990.
[12] Wen, Y. K. & H. C. Chen: On Fast Integration for Time Variant Structural Reliability.
Probabilistic Engineering Mechanics, Vol. 2, 1987, pp. 156-162.
RELIABILITY OF CURRENT STEEL BUILDING
DESIGNS FOR SEISMIC LOADS
ABSTRACT
This is a progress report on a research project which evaluates the performance and safety of buildings designed according to recently proposed procedures, namely the Uniform Building Code (UBC) and the Structural Engineers Association of California (SEAOC) provisions. The extensive results from recent analytical studies of structural behavior, laboratory tests of structural and nonstructural components, and field ground motion and damage investigations form the database for this study.
State-of-the-art reliability methods are used. The study concentrates on low- to medium-rise steel
buildings. Both time history and random vibration methods are used for the response analysis.
Limit states considered include maximum story drift, damage to nonstructural components and
content, and low cycle fatigue damage to members and connections. The risks implied in the
current procedure, for example those based on various Rw factors for different structural types,
will be calculated and their consistency examined.
INTRODUCTION
The commonly accepted philosophy in design of a building under seismic loads is to ensure
that it will withstand a minor or moderate earthquake without structural damage and survive a
severe one without collapse. To implement this design philosophy successfully, however, one has
to take into consideration the large uncertainties normally associated with the seismic excitation
and the considerable variabilities in structural resistance because of differences in structural type
and design and variability of material strength. This has not yet been done in current practice in
building design, although the need for consideration of the uncertainty involved has long been
recognized, especially in design of nuclear structures.
A large amount of knowledge has been accumulated from experience on building
performance in recent earthquakes, such as those in Japan, Mexico, and this country; also,
considerable progress has been made in the structure reliability research. In view of these
developments, the objective of this research is, therefore, to evaluate the performance and safety
of buildings designed according to the recently proposed and adopted procedures; namely, the
provisions recommended by the Structural Engineers Association of California (SEAOC) and
Uniform Building Code (UBC). Emphasis is on the realistic modeling of the specific buildings,
the nonlinear inelastic behavior of such design, and the quantification of the effect of the large
uncertainties in the seismic excitation. The tasks required for this research are summarized in the
following.
(1) Selection of Site and Risk Analysis. Two sites are considered for the study of building
response, both in Southern California. One of these is close to a major fault and the other one
is at some distance from it (see Fig. 1). The potential future earthquakes that present a threat to
the site are characterized as either characteristic or non-characteristic. The former are major seismic events which occur along the major fault (Fig. 1) and have relatively well understood recurrence time behavior [1]; the latter are minor local events whose occurrences collectively can be treated as a Poisson process [2,3]. The major parameters of the characteristic earthquake
for risk analysis are recurrence time, magnitude, epicentral distance to the site and attenuation,
whereas those for non-characteristic earthquakes are occurrence rate, local intensity, and duration.
They are treated as random variables and used as input to the random process ground motion
model.
(2) Modeling of Ground Motion. The ground motion model is that of a nonstationary random process whose intensity and frequency content vary with time [4]. This model allows straightforward identification of parameters from actual ground accelerograms, computer simulation of
the ground motion for time history response analysis, and analytical solution of inelastic structure
response by method of random vibration [5]. For the site where actual earthquake ground motion
records are available (i.e., Imperial Valley, California), model parameters are estimated and used
to predict structural response to future earthquakes. For the site where no such records are
available, a procedure has been established to determine the model parameters as functions of
those of the source, i.e., magnitude, epicentral distance, etc. based on information given in [6].
Also, for sites close to the fault, the important directivity effect [7] of the rupture surface, which is known to affect significantly the frequency content and duration of the ground motion, is considered in the ground motion model.
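A common form of such a model is filtered white noise scaled by a time-varying intensity envelope, sketched below. The envelope shape, filter parameters and durations are illustrative assumptions; the cited model additionally lets the frequency content vary with time:

```python
import math, random

# Sketch of a nonstationary ground-motion simulation: white noise passed
# through a second-order (Kanai-Tajimi-like) soil filter and modulated by a
# time-varying intensity envelope. All parameters are assumptions.

def simulate_accelerogram(duration=20.0, dt=0.01, wg=12.0, zg=0.6, seed=0):
    rng = random.Random(seed)
    n = int(round(duration / dt))
    x = v = 0.0                       # filter displacement and velocity
    record = []
    for i in range(n):
        t = i * dt
        if t < 4.0:                   # build-up phase
            env = (t / 4.0) ** 2
        elif t < 12.0:                # strong-motion phase
            env = 1.0
        else:                         # exponential decay
            env = math.exp(-0.4 * (t - 12.0))
        w = rng.gauss(0.0, 1.0) / math.sqrt(dt)      # discretized white noise
        a = w - 2.0 * zg * wg * v - wg * wg * x      # soil-filter equation
        v += a * dt                                  # semi-implicit Euler step
        x += v * dt
        record.append(env * (2.0 * zg * wg * v + wg * wg * x))
    return record

acc = simulate_accelerogram()         # 2000 samples at 100 Hz
```

Each such realisation can drive one time history analysis; repeating the simulation with fresh noise gives the response statistics discussed under task (4).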
(3) Building Design. The proposed study requires the design of six low-rise steel building types.
Using the 1988 Uniform Building Code (UBC) designation, the types of building included in this
study are as follows:
(1) Ordinary moment-resisting space frame (OMRSF)
(2) Special moment-resisting space frame (SMRSF)
(3) Concentric braced frame (CBF)
(4) Eccentric braced frame (EBF)
(5) Dual system with CBF
(6) Dual system with EBF
Two types of buildings, OMRSF and SMRSF, are designed in accordance with the 1988 UBC specifications using IGRESS2 [8], a computer software developed at the University of Illinois. Practicing engineers were consulted to ensure that the designs conform with current design procedures. Typical floor plan and elevation views are shown in Fig. 2. Only the perimeter frames are designed to carry seismic loads. The interior frames are designed with pinned connections at the ends of each girder. Design loads, member sizes, total weight and Rw values used are
included in Tables 1 to 4. In order to use the higher Rw value allowed for SMRSF, the Code
demands more stringent detailing, e.g., increased control of local buckling of the members and
control of the location of plastic hinge formation. SMRSF is subject to lesser base shear and its
design is more likely controlled by the drift limitation rather than strength requirements. CBF and
EBF buildings are also considered.
(4) Response and Damage Analysis. For a specified set of ground motion and structural
parameters, each building is analyzed for a large number of ground motions in order to determine
the statistics of responses and levels of damage. The following response characteristics for each
frame are compared: (1) story drifts; (2) damage of nonstructural elements; (3) energy dissipation demand; and (4) damage index. An upgraded version of the well-known finite element program DRAIN-2DX [9] is used for the simulations. This program includes elements that may be used to model the structural members. The damage of nonstructural elements and the maximum story drift are directly and indirectly related to the damage of real buildings; nonstructural elements include partition walls, cladding, etc. Structural damage appears in the form of local buckling and fracture around joints. This has been confirmed by the US/Japan cooperative full-scale tests and by previous studies of hysteretic energy dissipation.
A parallel analysis of response based on a time domain random vibration method [5] is also
carried out. This method gives response statistics of interest in this study such as maximum
displacement and hysteretic energy dissipation. Hysteretic energy dissipation demand and the
number of inelastic excursions are directly related to the damage index. A damage index based on the concept of low-cycle fatigue life of the beam-to-column connection [10] is developed.
After each time history analysis, important response quantities are extracted, used for the
calculation of damage index and stored. The results from these simulations are compared with the
random vibration analysis results. As an example, Fig. 3 shows the response statistics of SMRSF and OMRSF under future excitations characterized by the ground motion of the Corralitos station of the 1989 Loma Prieta earthquake. Only the mean responses are shown and compared with the design value and the response under the actual Loma Prieta ground acceleration. The coefficient of variation of the drift is 40% at the first floor and decreases to 11% at the roof.
(5) Limit State Risk Evaluation. Based on first- and second-order reliability methods and the fast integration technique [11] for time-variant systems, the effects of ground motion and structural parameter uncertainties are included, and the risk per year over a given period of time for the foregoing limit states of interest will be evaluated. For this purpose, methods based on a response surface technique [12] are also considered; this generally increases the efficiency of the method when the number of parameters considered is large, and will be especially effective in connection with the simulation study.
(6) Appraisal of Current Code Procedures. The consistency of current design procedures will be examined based on the results of the risk analysis. Emphasis will be on identifying the implications of various factors used in the design, in particular the Rw reduction factor, for different building types. Also, risks of joints and bracing members against fatigue-type cumulative damage will be examined.
ACKNOWLEDGMENT
This research is supported by the National Science Foundation under grant NSF CES-88-22690. The support is gratefully acknowledged.
REFERENCES
[4] Yeh, C. H. and Y. K. Wen. "Modeling of Nonstationary Ground Motion and Analysis of Inelastic Structural Response," Journal of Structural Safety, 1990.
[5] Wen, Y. K. "Method of Random Vibration for Inelastic Structures," Applied Mechanics Reviews, Vol. 42, No. 2, February 1989.
[6] Trifunac, M. D. and V. W. Lee. "Empirical Models for Scaling Fourier Amplitude Spectra of Strong Earthquake Accelerations in Terms of Magnitude, Source to Station Distance, Site Intensity and Recording Site Conditions," Soil Dynamics and Earthquake Engineering, Vol. 8, No. 3, July 1989.
[8] Ghaboussi, J. "An Interactive Graphics Environment for Analysis and Design of Steel Structures," IGRESS2, Version 2.0, Prairie Technologies Inc., Urbana, Illinois 61801.
[9] Kanaan, A. E. and G. H. Powell. "Drain-2D: A General Purpose Computer Program for Inelastic Dynamic Analysis of Plane Structures," Report EERC-73/06, University of California at Berkeley, April 1973.
[11] Wen, Y. K. and H. C. Chen. "On Fast Integration for Time Variant Structural Reliability," Probabilistic Engineering Mechanics, Vol. 2, No. 3, 1987, pp. 156-162.
[12] Iman, R. L. and J. C. Helton. "A Comparison of Uncertainty and Sensitivity Analysis Techniques for Computer Models," Report NUREG/CR-3904, Sandia National Laboratory, Albuquerque, New Mexico.
(Fig. 1: Map of the fault region showing the central creeping segment, the Parkfield segment and the Coachella Valley segment; scale 100 km.)
Fig. 2 Floor Plan and Elevation View of OMRSF and SMRSF Studied
347
[Figure: Drift ratio envelope (%) and story shear envelope (kips) over floors 1F-HF for the SMRSF and OMRSF.]
ABSTRACT
Simple analytical methods are shown for stochastic nonlinear dynamic analysis of offshore jacket
and jackup structures. Base shear forces are first modelled, and then imposed on a linear
1DOF structural model to predict responses such as deck sway. The force model retains the
effects of nonlinear wave kinematics and Morison drag on base shear moments, extremes, and
spectral densities. Analytical models are also given for response moments and extremes. Good
agreement with simulation is found for a sample North Sea jackup. The effects of variations in
environmental and structural properties are also studied.
INTRODUCTION
Jackup platforms have become a standard tool for offshore operations, to water depths of about
100 meters. Recent trends have extended their use to deeper, more hostile environments, such
as all-year operation in the North Sea. This places additional demands on the jackup, whose
horizontal stiffness is typically an order of magnitude less than that of a corresponding jacket
structure. Due to its flexibility, the fundamental period of a jackup may typically range from
3 to 8 seconds. Because there may be considerable wave energy at these frequencies, dynamic
effects should not be neglected.
Simple analytical methods are shown here for stochastic nonlinear dynamic analysis of
jackets and jackups. Base shear forces are first modelled, and then applied to a linear 1DOF
structural model to predict responses such as deck sway. The force model retains various nonlin-
ear effects, such as nonlinear wave kinematics and Morison drag. Analytical results are given for
the marginal moments, extreme base shear fractiles and power spectra. These spectra include
the additional high-frequency content induced by nonlinearities, which may increase resonant
response effects. Corresponding analytical models are also given for response moments and
extremes.
Good agreement is found with simulated results for a sample North Sea jackup. The
effects of variations in environmental and structural properties are also shown. Increasing wave
height and integration of particle velocities to the exact surface are found to significantly affect
both gross force levels and the relative contribution of nonlinear, non-Gaussian effects. In
contrast, varying structural properties may have somewhat offsetting effects. Larger periods or
smaller damping yield greater dynamic effects, which generally raise response variance but often
reduce higher response moments. Gaussian models may then underpredict response extremes,
yet overestimate their variation with structural period and damping.
The gross effects of waves and current are shown by the total base shear and overturning moment
they cause. We focus here on the base shear F(t) on jackup and jacket structures, predicting
its mean, variance, power spectrum, skewness and kurtosis. The former three quantities are
sufficient for linear response analysis if F(t) is Gaussian; the latter two reflect non-Gaussian force
effects. We conclude this section by using these statistics to estimate maximum base shear levels
and force spectral densities. Entirely analogous techniques may be used to estimate statistics of
overturning moment. In the next section, these force statistics are used to analytically predict
nonlinear effects on structural responses such as deck sway.
Two levels of modelling are considered: conventional state-of-the-art simulation for ran-
dom irregular waves and a new analytical approach. Particle velocities and accelerations below
the mean water level z=0 are found from linear wave theory, and "stretched" to the exact surface
through vertical extrapolation of their values at z=0. The nonlinear Morison model is used to
estimate the wave force on each leg per unit elevation; this result is integrated over elevation to
provide the total base shear at any time.
(1)
in terms of the significant wave height H_s, the peak spectral period T_p, and the spectral peak
factor γ. The constant C(γ) = 1 − 0.287 ln(γ) is introduced to roughly preserve the total spectral
area σ_η² = H_s²/16 (Andersen et al., 1987). This reference also suggests the following peak factor:
(2)
In the special case where γ=1, Eq. 1 gives the ISSC or Pierson-Moskowitz spectrum.
We consider here long-crested waves travelling in the positive x-direction. We seek the
wave elevation η_x(t) as a function of location x and time t, as well as the horizontal wave particle
velocity and acceleration, u_{x,z}(t) and u̇_{x,z}(t), for various x, t, and elevation levels z. Based on
linear wave theory, these quantities can be simulated as follows (Borgman, 1969):
η_x(t) = Σ_i A_i cos(ψ_i);   u_{x,z}(t) = Σ_i A_i ω_i T(z, ω_i) cos(ψ_i);   u̇_{x,z}(t) = −Σ_i A_i ω_i² T(z, ω_i) sin(ψ_i);   ψ_i = ω_i t + φ_i − k_i x   (3)
in terms of independent, uniformly distributed phases φ_i at the fixed frequencies ω_i. Note
that the only exogenous information in this result is the wave elevation spectrum, S_η(ω). The
amplitudes A_i are chosen from this spectrum as
A_i = [2 S_η(ω_i) Δω]^{1/2}   (4)
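A minimal numerical sketch of this simulation recipe follows; a toy rectangular spectrum stands in for the JONSWAP model of Eq. 1, and all parameter values are illustrative assumptions, not values from the paper.

```python
# Sketch of Eqs. 3-4 at x = 0: the elevation eta(t) as a random-phase cosine
# sum (Borgman, 1969).  A toy rectangular spectrum replaces the JONSWAP model
# of Eq. 1 so the example stays self-contained; all values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, dw = 512, 0.01                        # number of components, spacing [rad/s]
w = dw * np.arange(1, n + 1)             # frequencies omega_i
S = np.where((w > 0.4) & (w < 0.9), 1.0, 0.0)   # toy elevation spectrum

A = np.sqrt(2.0 * S * dw)                # amplitudes, Eq. 4
phi = rng.uniform(0.0, 2.0 * np.pi, n)   # independent uniform phases

t = np.arange(0.0, 2000.0, 0.5)          # time grid [s]
eta = (A[:, None] * np.cos(w[:, None] * t + phi[:, None])).sum(axis=0)

# the sample variance should approach the spectral area sum S(w_i) dw
print(eta.var(), (S * dw).sum())
```

A direct sum is used here for clarity; as noted below, production simulations would evaluate these sums with the FFT or FHT instead.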
The wave number ki and transfer function T(z, Wi) corresponding to frequency Wi in Eq. 3 are
found by solving
ω_i² = g k_i tanh(k_i d);   T(z, ω_i) = cosh k_i(z + d) / sinh(k_i d)   (5)
The first result here is the dispersion relation, which must generally be inverted numerically
to find k_i from ω_i. The sums in Eq. 3 are typically evaluated with the Fast Fourier Transform
("FFT"), although increased efficiency may be gained through the real-valued Fast Hartley
Transform, or "FHT" (Winterstein, 1990). This reference also shows FHT simulation of non-
Gaussian sea surfaces.
Note that different wave components in Eq. 3 travel at different speeds. For relatively deep
water, Eq. 5 gives λ/T² = g/(2π) = 1.56 m/s². Various wave components may thus be reinforced
or cancelled, depending on the relation between their wave lengths and structural dimensions.
For example, resonant jackup response may be due to 5-6 second waves, with lengths (40-60
m) comparable to typical leg spacings. In contrast, the leg spacing may be comparable to the
half-wavelengths of longer, 7-9 second waves. The net force due to these slower wave components
may then be reduced, due to their incoherence.
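The numerical inversion of the dispersion relation mentioned above can be sketched as follows; the Newton iteration and parameter values are assumptions of this illustration, not code from the paper.

```python
# Sketch of the numerical inversion of the dispersion relation (Eq. 5),
# w^2 = g k tanh(k d), by Newton iteration, with the deep-water check
# lambda/T^2 = g/(2 pi) = 1.56 m/s^2 quoted in the text.
import math

def wave_number(w, d, g=9.81, tol=1e-12):
    """Solve w^2 = g k tanh(k d) for the wave number k > 0."""
    k = w * w / g                            # deep-water starting value
    for _ in range(100):
        f = g * k * math.tanh(k * d) - w * w
        df = g * math.tanh(k * d) + g * k * d / math.cosh(k * d) ** 2
        k_new = k - f / df
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    return k

T = 6.0                                      # wave period [s]
k = wave_number(2.0 * math.pi / T, d=1000.0) # effectively deep water
lam = 2.0 * math.pi / k                      # wave length [m]
print(lam / T**2)                            # ~ 1.56 m/s^2
```

The deep-water value w²/g is a good starting point, so the iteration converges in a handful of steps even for intermediate depths.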
Finally, the total base shear F(t) is found by applying the Morison force model to each
leg:
(6)
in terms of the drag and mass coefficients, k_d and k_m, the particle velocity u and acceleration u̇
from Eq. 3, and the current velocity u_c. Some formulations use Eq. 6 with the relative particle
velocity with respect to the structure; this is not done here. Above the mean water surface
(z > 0), we employ constant stretching of the linear wave theory values at z=O:
u_{x,z}(t) = u_{x,0}(t);   u̇_{x,z}(t) = u̇_{x,0}(t)   for 0 < z ≤ η_x(t)   (7)
Unlike simulation with linear kinematics, proper stretching requires that we explicitly
simulate the elevation η in Eq. 3, along with particle velocities and accelerations. While FFT or
FHT simulations yield entire histories of these quantities simultaneously, the force integration
in Eq. 6 must then be performed numerically at each time step. The associated numerical costs
motivate the need for analytical models, such as those considered below.
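The per-time-step force integration can be sketched as follows for a single leg at a single instant; the coefficients k_d, k_m, the current, and the wave parameters are illustrative assumptions, not values from the paper.

```python
# Sketch of Eqs. 6-7: Morison force per unit elevation, integrated over depth,
# with "constant stretching" of the z=0 kinematics above the mean water level.
# kd_coef, km_coef, u_c and the wave parameters are illustrative assumptions.
import numpy as np

g, d = 9.81, 100.0                 # gravity [m/s^2], water depth [m]
a, T = 5.0, 10.0                   # regular-wave amplitude [m] and period [s]
w = 2.0 * np.pi / T
k = w * w / g                      # deep-water wave number (k*d >> 1 here)
kd_coef, km_coef = 1.0e3, 2.0e3    # drag and mass coefficients per unit length
u_c = 0.5                          # current velocity [m/s]

eta = a                            # evaluate at the crest: eta = a, udot = 0
z = np.linspace(-d, eta, 2001)
# transfer function T(z, w) of Eq. 5, held constant above z = 0 (Eq. 7)
Tz = np.cosh(k * (np.minimum(z, 0.0) + d)) / np.sinh(k * d)
u = a * w * Tz                     # particle velocity at the crest
udot = np.zeros_like(z)            # particle acceleration vanishes at the crest

f = kd_coef * (u + u_c) * np.abs(u + u_c) + km_coef * udot  # force/length
F = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))             # trapezoid over z
print(F)                           # total shear on one leg [N]
```

Repeating this quadrature at every time step of a simulated record is exactly the numerical cost referred to above.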
E[F^m] = ∫∫ [F(a, φ)]^m f_A(a) f_Φ(φ) da dφ   (9)
Thus, the wave force and kinematics models are reflected through the base shear force,
F(a, φ), for regular waves with various amplitudes a and phases φ. The number of force evalu-
ations needed can often be reduced through numerical quadrature. For example,
E[F^m] ≈ Σ_{i=1}^{N} Σ_{j=1}^{N} p_i p_j [F(a_ij, φ_ij)]^m;   a_ij = 0.25 H_s (ξ_i² + ξ_j²)^{1/2},   φ_ij = tan⁻¹(ξ_j/ξ_i)   (10)
in terms of the N Hermite quadrature points ξ_1, ..., ξ_N and the corresponding probability
weights p_i = N!/[N He_{N−1}(ξ_i)]². The values ξ_i can be found by finding all roots of He_N(ξ)=0;
alternatively, various sources have tabulated these values for different N.
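This weight formula is easy to verify numerically; the check below, with an assumed order N=8, uses numpy's hermite_e module, which works with exactly these probabilists' Hermite polynomials He_n.

```python
# Check of the quadrature rule in Eq. 10: the points xi_i are the roots of the
# probabilists' Hermite polynomial He_N, and the probability weights
# p_i = N!/[N He_{N-1}(xi_i)]^2 should sum to one.
import math
import numpy as np
from numpy.polynomial import hermite_e as He

N = 8
xi, w = He.hermegauss(N)                 # nodes/weights for weight exp(-x^2/2)

# He_{N-1}(xi_i): the coefficient vector selects the (N-1)-th polynomial
heNm1 = He.hermeval(xi, [0.0] * (N - 1) + [1.0])
p = math.factorial(N) / (N * heNm1) ** 2 # probability weights from the text
print(p.sum())                           # ~ 1.0
```

The weights p_i are simply the hermegauss weights normalized by (2π)^{1/2}, i.e., Gauss-Hermite integration against the standard normal density.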
Finally, note that Eqs. 9-10 can also use other force and kinematic models, leading to
different forces F(a, φ). For example, linear kinematics can be extended with Wheeler, delta
or other stretching models (Gudmestad and Spidsoe, 1990). For models with non-Gaussian sea
surfaces, however, it may be more difficult to assign distributions f_A(a) and f_Φ(φ) consistent
with an observed spectrum S_η(ω).
F(t) = m_F + κ σ_F [U(t) + c_3(U²(t) − 1) + c_4(U³(t) − 3U(t))]   (11)
[Figure 1: Base shear moments for various wave heights and current. Panels: mean, standard deviation, skewness, and kurtosis of base shear vs. wave height (4-14 m) and current (0-1 m/s); analytical (mean surface, exact surface) vs. simulation (±1σ).]
[Figure 2: Median extreme base shear force (MN) vs. significant wave height (6-12 m); simulation (±1σ) compared with 2-moment Gaussian and 4-moment Hermite analytical models.]
in which c_4 = [(1 + 1.5(α_4F − 3))^{1/2} − 1]/18, c_3 = α_3F/(6 + 36c_4), and κ = 1/(1 + 2c_3² + 6c_4²)^{1/2}. Alterna-
tive models are available for kurtosis values α_4F < 3, although such cases are not encountered
here.
The p-fractile extreme force in period T can be estimated from Eq. 11, taking U as the
corresponding Gaussian extreme estimate:
U_p = { 2 ln[ (nT/T_0) / ln(1/p) ] }^{1/2}   (12)
Here T_0 is the average period and n=1 or 2 for max F(t) or max|F(t)|, respectively. Figure 2
shows median estimates of max F(t) in a typical seastate of duration T=3 hours, using Eq. 12
with n=1, p=0.5, and T_0 = T_p = 12.5 s. The Hermite results use Eqs. 11-12 with all four analytical
force moment estimates; the Gaussian model uses only the first two force moments and sets
C3=C4=0 in Eq. 11. (Only stretched kinematics to the exact surface have been considered in
Figures 2-5.)
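The two estimates can be compared numerically as follows. The transformation g(u) = m_F + κσ_F[u + c_3(u²−1) + c_4(u³−3u)] is the standard Hermite model (Winterstein, 1988); the force moments used below are illustrative values, not results from the paper.

```python
# Sketch of Eqs. 11-12: map the Gaussian extreme estimate U_p through the
# four-moment Hermite transformation.  All force moments are assumed values.
import math

mF, sigF, a3, a4 = 2.0e6, 1.0e6, 1.2, 6.5    # assumed first four force moments

c4 = (math.sqrt(1.0 + 1.5 * (a4 - 3.0)) - 1.0) / 18.0
c3 = a3 / (6.0 + 36.0 * c4)
kappa = 1.0 / math.sqrt(1.0 + 2.0 * c3**2 + 6.0 * c4**2)

# Gaussian p-fractile extreme in duration T (Eq. 12), n = 1 for max F(t)
T, T0, p, n = 3 * 3600.0, 12.5, 0.5, 1
Up = math.sqrt(2.0 * math.log(n * T / T0 / math.log(1.0 / p)))

F_gauss = mF + sigF * Up                     # 2-moment Gaussian estimate
F_herm = mF + kappa * sigF * (Up + c3 * (Up**2 - 1.0)
                              + c4 * (Up**3 - 3.0 * Up))
print(Up, F_gauss, F_herm)                   # Hermite estimate is the larger
```

For positive skewness and kurtosis above 3, the cubic terms inflate the Gaussian fractile, which is the mechanism behind the Gaussian underprediction discussed next.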
Note that although the force mean and standard deviation have been accurately estimated
(Figure 1), a Gaussian model based on these values underestimates extreme 3-hour forces by
roughly 50% in the large-H_s seastates that govern design. The Hermite models use the extra
force skewness and kurtosis information to produce markedly improved results. In view of the
accuracy in all four predicted force moments (Figure 1), the roughly 10% conservative bias in
4-moment extreme estimates is due to the Hermite model (Eq. 11). Systematic study shows
that this error lies within the scatter observed among different nonlinearities with common
first four moments (Winterstein and Ness, 1989). Moreover, the Hermite model is often found
conservative with respect to various actual nonlinear mechanisms.
Force Spectral Densities. Finally, Eq. 11 implies the following simple relation between the
correlation functions of F(t) and the underlying Gaussian process U(t) (Winterstein, 1990):
ρ_F(τ) = κ² [ρ_U(τ) + 2c_3² ρ_U²(τ) + 6c_4² ρ_U³(τ)]   (13)
The underlying Gaussian correlation function ρ_U(τ) may be taken from an equivalent linear
force model, for example. The nonlinear terms ρ_U² and ρ_U³ show the increased power at two and
three times the principal frequency, respectively, induced by the nonlinearity.
Figure 3 shows corresponding spectral densities, based on the Fourier transform of Eq. 13.
For simplicity, the power spectrum of U(t) is taken as the wave elevation spectrum in Eq. 1.
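Equation 13 can be exercised directly: transform an assumed narrow-band Gaussian correlation ρ_U into ρ_F, and compare the spectra obtained by Fourier transforming both. The correlation model and the Hermite coefficients c_3, c_4 below are illustrative assumptions.

```python
# Sketch of Eq. 13 and the Figure 3 comparison: the rho_U^2 and rho_U^3 terms
# add spectral power near 2 and 3 times the wave frequency.
import numpy as np

dt = 0.25
tau = np.arange(0.0, 4096 * dt, dt)
wp = 2.0 * np.pi / 12.5                        # spectral peak frequency [rad/s]
rhoU = np.exp(-0.02 * tau) * np.cos(wp * tau)  # assumed Gaussian correlation

c3, c4 = 0.15, 0.08                            # illustrative Hermite coefficients
kappa = 1.0 / np.sqrt(1.0 + 2.0 * c3**2 + 6.0 * c4**2)
rhoF = kappa**2 * (rhoU + 2.0 * c3**2 * rhoU**2 + 6.0 * c4**2 * rhoU**3)  # Eq. 13

# two-sided spectra via FFT of the symmetrically extended correlations
def spectrum(r):
    return np.fft.rfft(np.concatenate([r, r[-1:0:-1]])).real * dt

freqs = np.fft.rfftfreq(2 * len(tau) - 1, dt) * 2.0 * np.pi
SU, SF = spectrum(rhoU), spectrum(rhoF)

i2 = np.argmin(np.abs(freqs - 2.0 * wp))       # index nearest 2*wp
print(SF[i2] > SU[i2])                         # extra second-harmonic power
```

Note that κ² normalizes ρ_F(0) to unity, so the added harmonic content comes at the expense of power near the principal frequency.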
[Figure 3: Base shear spectral density (logarithmic scale, roughly 10^9-10^12) vs. frequency f (Hz), 0-0.5 Hz.]
The Hermite model based on four analytical moments again shows better agreement with sim-
ulation than the Gaussian model. Although the wave spectrum is rather narrow (γ=4.5), both
simulation and Hermite model show rather weak modes at the higher harmonics f = 2/T_p = 0.16 Hz
and 3/T_p = 0.24 Hz. There is a general increase both in low- and high-frequency power, however,
not reflected by the wave elevation spectrum-perhaps a factor of about 3 in high-frequency
spectral ordinates.
In this section, we seek similar moment-based Hermite models of dynamic responses. We fo-
cus here on the deck sway response, from which local member stresses can be estimated. The
necessary four response moments for these models can be estimated in various ways, includ-
ing systematic non-Gaussian closure (Winterstein and Ness, 1989) and time-domain simulation
(Løseth et al., 1990). (By coupling simulation with Hermite models of extremes, the simulation
duration and cost may be reduced because only a limited number of response moments need be
reliably estimated.) These techniques permit arbitrary nonlinear structural behavior, as well as
Morison drag forces based on relative velocities.
We consider here simpler cases of linear structures under non-Gaussian loads. Various
specialized techniques can be applied in these cases to estimate response moments. Recursive
moment relations can be used (Grigoriu and Ariaratnam, 1987; Krenk and Gluver, 1988), if the
force is modelled as a functional transformation of a filtered white noise process. Alternative
closed-form moment results, which permit a force with arbitrary spectral density, follow from
a separable response cumulant model (Koliopulos, 1987). Due to its convenience, we use this
separable model here.
Working in the time domain, we describe the force by its correlation function ρ_F(τ) and
the linear structure by its impulse response function h(τ). The response mean and variance, m_X
and σ_X², are related to those of the force as follows (Lin, 1976):

m_X = m_F ∫₀^∞ h(τ) dτ;   σ_X² = σ_F² ∫₀^∞ h(τ) Q(τ) dτ,   where Q(τ) = ∫₀^∞ h(u) ρ_F(τ − u) du   (14)
This result for σ_X² requires a double integration, or equivalently, a single integration once
the inner integral for Q(τ) has been found for various lags τ. Significantly, once Q(τ) has
been found, additional single integrals yield higher-order moments with the separable model
(Koliopulos, 1987):
α_3X/α_3F = [∫₀^∞ h(τ) Q²(τ) dτ] / [∫₀^∞ h(τ) Q(τ) dτ]^{3/2};   (α_4X − 3)/(α_4F − 3) = [∫₀^∞ h(τ) Q³(τ) dτ] / [∫₀^∞ h(τ) Q(τ) dτ]²   (15)
Once obtained, these moments can be used to estimate response extremes and power spectra as
in Eqs. 11-13.
We adopt here a one-degree-of-freedom (1DOF) structural model, so that h(t) = exp(−ζω_n t)
sin(ω_d t)/(mω_d) in terms of the natural frequency ω_n, damping ratio ζ, and damped frequency
ω_d = ω_n(1 − ζ²)^{1/2}. Note that the foregoing results apply equally to multi-degree-of-freedom linear
systems, in which h(t) is a linear combination of 1DOF modal impulse responses.
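The single-integration structure of Eqs. 14-15 is easy to exercise numerically. In the sketch below the force correlation is an assumed exponential-cosine (standing in for ρ_F(τ) of Eq. 13), and the force mean and standard deviation are illustrative values.

```python
# Discretized evaluation of Eqs. 14-15 with the 1DOF impulse response
# h(t) = exp(-zeta*wn*t) sin(wd*t)/(m*wd).  All force statistics are assumed.
import numpy as np

zeta, Tn, m = 0.01, 6.59, 21.0e6       # damping, natural period [s], mass [kg]
wn = 2.0 * np.pi / Tn
wd = wn * np.sqrt(1.0 - zeta**2)

dt = 0.05
tau = np.arange(0.0, 1500.0, dt)       # long enough for h(t) to decay
h = np.exp(-zeta * wn * tau) * np.sin(wd * tau) / (m * wd)

mF, sigF = 1.0e6, 0.5e6                # assumed force mean and std. dev. [N]
wp = 2.0 * np.pi / 12.5                # dominant force frequency [rad/s]

def rhoF(x):                           # assumed (even) force correlation
    x = np.abs(x)
    return np.exp(-0.05 * x) * np.cos(wp * x)

step = 20                              # coarser grid for the outer integrals
tau_c, h_c = tau[::step], h[::step]
Q = np.array([np.sum(h * rhoF(t0 - tau)) * dt for t0 in tau_c])  # inner integral

mX = mF * np.sum(h) * dt                               # response mean, Eq. 14
I1 = np.sum(h_c * Q) * dt * step                       # int h Q dtau
varX = sigF**2 * I1                                    # response variance
skew_ratio = np.sum(h_c * Q**2) * dt * step / I1**1.5  # Eq. 15, alpha_3X/alpha_3F
print(mX, varX, skew_ratio)
```

The response mean reduces to the static value m_F/(mω_n²), which gives a convenient consistency check on the discretization.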
Figure 4 shows deck sway moments for various natural periods, with 1% damping and
modal mass m = 21×10³ tonnes (including both deck and legs). Analytical results are based on
Eqs. 14 and 15, with force correlation function ρ_F(τ) from Eq. 13. Both the response mean and
variance grow with Tn, the former due to decreasing stiffness and the latter due to increased
resonance. At the same time, however, as Tn grows the non-Gaussian force is more effectively
"averaged" by the structure, leading to reduced higher response moments. Thus, dynamics may
somewhat lessen nonlinear/non-Gaussian effects due to force nonlinearities.
Similar offsetting effects occur with various damping levels: the response variance in-
creases with decreasing damping, but the higher moments decrease at the same time. As a
result, extreme responses may vary less rapidly with structural properties, such as period T_n or
damping ζ, than the Gaussian model predicts. This is shown by Figure 5, which shows predicted
and simulated response extremes versus damping ratio for T_n = 6.59 s, the estimated period of
the jackup under study. While the Gaussian model generally underestimates response extremes,
it overestimates their variation with ζ. This suggests that in predicting extreme response, the
choice of damping value may be less significant than the Gaussian model implies.
CONCLUSIONS
• In addition to gross force levels, the relative contribution of nonlinear, non-Gaussian effects
on base shear grows with wave height and with integration to the exact surface (Figure 1).
• Base shear moments are accurately predicted by an analytical narrow-band model, which
requires a double integration over amplitude and phase (Eqs. 9 and 10). Resulting base
shear extremes and power spectra are accurately estimated from these moments with non-
Gaussian Hermite models (Eqs. 11-13; Figures 2 and 3).
• For linear jackup structures, either 1DOF or MDOF, convenient analytical results have
been shown for response moments (Eqs. 14 and 15). These agree well with simulation (Fig-
ure 4), and their analytical form makes them useful when combined with outer integration
over random environmental variables.
• Resonant effects lead to increasing response variances with decreasing damping or increas-
ing period Tn (approaching the wave period). Response extremes may grow less rapidly,
because dynamic effects may somewhat reduce nonlinear/non-Gaussian effects due to force
nonlinearities. Gaussian models may then underpredict response extremes, yet overesti-
mate their variation with structural period and damping.
[Figure 4: Deck sway moments (mean, standard deviation, skewness, kurtosis) vs. structural natural period (3-7 s); analytical vs. simulation.]
[Figure 5: Median extreme response vs. damping ratio ζ (0.01-0.2); Gaussian and Hermite predictions vs. simulation.]
Topics of planned future study include more general environmental models. These may include
variable current profiles with depth, short-crested seas, and, perhaps most importantly, wind
as well as wave and current in possibly different directions. More general structural behavior
will also be considered, including geometric and soil nonlinearities, relative velocity effects, and
fatigue failure prediction.
The wave force models used here are continuations of work with Sverre Haver of Statoil, begun
during his recent visit to Stanford. Financial support for the first author has been provided by
the Office of Naval Research, Contract No. N00014-87-K-0475, and by the Reliability of Marine
Structures Program of Stanford University. The second author has received support from A.S
Veritas Research, a subsidiary of Det norske Veritas.
REFERENCES
Andersen, O.J., E. Førland, S. Haver, and P. Strass (1987). Design basis environmental conditions
for Veslefrikk. Rept. 87004A, STATOIL, Stavanger, Norway.
Borgman, L.E. (1969). Ocean wave simulation for engineering design. J. Waterways Harbors,
ASCE, 95(4), 556-583.
Grigoriu, M. and S. T. Ariaratnam (1987). Stationary response of linear systems to non-Gaussian
excitations. Proc., ICASP-5, ed. N.C. Lind, Vancouver, B.C., II, 718-724.
Gudmestad, O.T. and N. Spidsoe (1990). Deepwater wave kinematics models for determinis-
tic and stochastic analysis of drag dominated structures. Proc., NATO-ARW Water Wave
Kinematics, ed. A. Tørum and O.T. Gudmestad, Kluwer Academic Publishers, Dordrecht,
The Netherlands, 57-87.
Haver, S. (1990). On the effects of a joint environmental description and uncertain parameters on
the extremes of a drag-dominated structure. Report RMS-5, Reliability of Marine Structures
Program, Dept. of Civ. Eng., Stanford University.
Koliopulos, P.K. (1987). Prediction methods for non-Gaussian response in linear structural dy-
namics based on a separability assumption. Ph.D. thesis, Civ. Eng. Dept., Univ. College
London.
Krenk, S. and H. Gluver (1988). An algorithm for moments of response from non-normal exci-
tation of linear systems. Stochastic structural dynamics: progress in theory and applications,
ed. S.T. Ariaratnam, G.I. Schueller and I. Elishakoff, Elsevier Publishing, Inc., New York.
Lin, Y.K. (1976). Probabilistic theory of structural dynamics, Robert E. Krieger Publishing Co.,
Huntington, New York.
Ll2lseth, R., O. Mo, and !. Lotsberg (1990). Probabilistic analysis of a jack-up platform with
respect to the ultimate limit state. European Offshore Mechanics Symposium, NTH, Trond-
heim, Norway.
Winterstein, S.R. (1988). Nonlinear vibration models for extremes and fatigue. J. Engrg. Mech.,
ASCE, 114(10), 1772-1790.
Winterstein, S.R. (1990). Random process simulation with the Fast Hartley Transform. J. Sound
Vib., 137(3), 527-531.
Winterstein, S.R. and O.B. Ness (1989). Hermite moment analysis of nonlinear random vibra-
tion. Computational mechanics of probabilistic and reliability analysis, ed. W.K. Liu and T.
Belytschko, Elme Press, Lausanne, Switzerland, 452-478.
STOCHASTIC PROGRAMS FOR IDENTIFYING
SIGNIFICANT COLLAPSE MODES IN STRUCTURAL SYSTEMS
Introduction
Background
The structural members are idealized as rigid links connected by nodes. The nodes are
idealized as plastic hinges with unlimited ductility occurring at locations of concentrated
loads and member intersections. The effect of axial force and shear on the plastic moment
capacity of the members is neglected.
Two standard methods of plastic analysis exist: the static or equilibrium approach,
and the kinematic or mechanism method. The kinematic method is used in the following
analysis to identify mechanisms. Failure of a frame is said to occur when a mechanism
forms. Given a structure with s potential hinge locations and a degree of redundancy r, the
number of elementary mechanisms is m = s − r. Using the superposition of mechanisms
approach, any kinematically admissible failure mechanism can be obtained from a linear
combination of the elementary mechanisms [5,9]. These elementary mechanisms can be
derived either by visual inspection or by an automatic procedure [12].
Letting M_pj be the plastic moment capacity of the jth critical section and θ_j be the
rotation of the jth section, the internal work associated with a mechanism can be written,

W_int = Σ_{j=1}^{s} M_pj |θ_j|   (1)
The external work of any kinematically admissible mechanism can be written in terms of
the external work of the elementary mechanisms, e_i, and the participation factors for the
elementary mechanisms, t_i,

W_ext = Σ_{i=1}^{m} e_i t_i   (2)
The safety margin for any mode can then be expressed as the difference between the
internal and external work,

Z = Σ_{j=1}^{s} M_pj |θ_j| − Σ_{i=1}^{m} e_i t_i   (3)
In this analysis, the safety margins are assumed to follow a joint normal distribution.
With independence between the loads and resistances, the mean and variance of the safety
margin can be written as
m_Z = |θ|ᵀ m_M − tᵀ m_e   (4)

σ_Z² = |θ|ᵀ V_M |θ| + tᵀ V_e t   (5)

where m_M and m_e are the vectors of the mean plastic moment capacities and mean el-
ementary mechanism external work terms, respectively. The covariance structure of the
random variables is specified by the variance-covariance matrices, V_M and V_e.
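These moment expressions can be sketched numerically for a single hypothetical mechanism; the rotations, capacities, and load statistics below are illustrative, not the Ma and Ang example values.

```python
# Sketch of Eqs. 4-6 for one hypothetical mechanism: four plastic hinges and
# one participating elementary mechanism.  All numbers are illustrative.
import math

import numpy as np
from scipy.stats import norm

theta = np.array([1.0, -2.0, 1.0, 0.0])      # hinge rotations (kinematics)
t = np.array([1.0])                          # participation factors

mM = np.full(4, 100.0)                       # mean plastic moment capacities
VM = np.diag(np.full(4, 15.0**2))            # capacity covariance matrix
me = np.array([250.0])                       # mean external work terms
Ve = np.array([[50.0**2]])                   # load covariance matrix

abs_th = np.abs(theta)
mZ = abs_th @ mM - t @ me                    # safety-margin mean, Eq. 4
varZ = abs_th @ VM @ abs_th + t @ Ve @ t     # safety-margin variance, Eq. 5
beta = mZ / math.sqrt(varZ)                  # reliability index
pf = norm.cdf(-beta)                         # failure probability
print(beta, pf)
```

In the mathematical program of the next section, the rotations and participation factors above become the decision variables that the optimizer adjusts to maximize pf.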
The first step in the analysis is to identify the mode with the highest probability of
failure. In order to avoid full enumeration of all possible failure modes, a mathematical
program is written to find the mode with the highest probability of failure, considering only
kinematically admissible modes. The mathematical program to identify the least reliable
mode is,
Max:   P_f = ∫_{−∞}^{0} f_Z dz = Φ(−m_Z/σ_Z) = Φ(−β)   (6)
where f_Z is the probability density function of the safety margins (assumed to be Gaussian),
Φ(·) is the cumulative normal probability distribution, β is the reliability index, and θ_ij
is the rotation of the jth joint in the ith elementary mechanism.
The decision variables in this mathematical program are the rotations of the joints,
θ_j, and the elementary mechanism participation factors, t_i. There are s linear constraints
to preserve kinematic admissibility at each critical section. The objective function is a
nonlinear function of the decision variables.
An alternative mathematical program is to minimize the reliability index, β, subject
to the same constraint set. The basis (mechanism) at the optimal solution of this program
is the same as that from the solution of the program given by equations 6 and 7, since the
Φ(·) function is a monotonic transformation.
(8)
where f_{Z1Z2} is the joint density function for modes 1 and 2 (the bivariate normal
density function). Note that m₁ and σ₁ are constants determined by the first cycle of the
analysis and only m₂, σ₂, and ρ₁₂ are functions of the decision variables, θ_j and t_i.
In general, once n−1 modes have been identified, the nth mode is identified so as to
maximize the system probability of failure. That is, find the mode such that P[Z₁ ≤ 0
∪ Z₂ ≤ 0 ∪ ··· ∪ Z_n ≤ 0] is maximized when n−1 modes have already been identified. The
mathematical program to find the nth mode is:
where the mean and variance of the first n-l modes are known as well as the correlation
between these modes. Only the mean and variance of the nth mode and the correlation of
the nth mode with the previous modes are functions of the decision variables. The number
of constraints and decision variables is the same as in all previous cycles.
The objective function (equation 10) has to be evaluated many times during the search
for the optimal solution. Unfortunately, multiple evaluations of the multinormal integral in
equation 10 are computationally expensive even when using reduction formulae [10] or
advanced Monte Carlo techniques. Replacement of the objective function with
Ditlevsen's lower bound [2] for an n-member series system provides an efficient alternative
to the evaluation of the multinormal integral. Using the lower bound approximation, the
objective function of the mathematical program to find the nth mode is
Max:   P[Z₁ ≤ 0] + Σ_{i=2}^{n} max{ (P[Z_i ≤ 0] − Σ_{j=1}^{i−1} P[Z_i ≤ 0 ∩ Z_j ≤ 0]), 0 }   (12)
subject to the constraints of equation 11. Use of the lower bound requires solution of the
mathematical program by non-gradient-based techniques (e.g., a Hooke and Jeeves routine
[11]) since gradients for equation 12 are not easily computed.
The lower bound is most accurate when the correlation between the modes is relatively
low. Modes selected by the mathematical programs tend to have low correlation since
these generally contribute more to system reliability than modes with high correlation.
Thus, in the context in which the lower bound is used here, the approximation is quite
accurate.
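Ditlevsen's lower bound (the objective of Eq. 12) can be sketched for a small series system of jointly normal safety margins; the reliability indices and mode correlations below are illustrative assumptions.

```python
# Sketch of Ditlevsen's lower bound for a series system of jointly normal
# safety margins.  The joint probabilities P[Zi<=0, Zj<=0] are bivariate
# normal orthant probabilities, evaluated with scipy.
import numpy as np
from scipy.stats import multivariate_normal, norm

beta = np.array([2.0, 2.3, 2.6])             # assumed mode reliability indices
rho = np.array([[1.0, 0.3, 0.2],
                [0.3, 1.0, 0.4],
                [0.2, 0.4, 1.0]])            # assumed mode correlations

def p_joint(i, j):
    """P[Zi <= 0 and Zj <= 0] for standardized, correlated margins."""
    cov = [[1.0, rho[i][j]], [rho[i][j], 1.0]]
    return float(multivariate_normal(mean=[0.0, 0.0], cov=cov)
                 .cdf([-beta[i], -beta[j]]))

p = norm.cdf(-beta)                          # single-mode probabilities
lower = p[0]
for i in range(1, len(beta)):
    lower += max(p[i] - sum(p_joint(i, j) for j in range(i)), 0.0)
print(lower)                                 # lower bound on system P_f
```

Only bivariate integrals are needed, which is what makes repeated evaluation inside the optimization affordable compared with the full multinormal integral.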
Local Optimality
It can be shown that the mathematical programs set out in equations 6-12 are non-
convex programs [11]. Solutions from the optimization, while Kuhn-Tucker points, cannot
therefore be guaranteed globally optimal. Thus, a strategy to obtain a group of locally op-
timal solutions which contains the global optimum is necessary. As is typical in the solution
of nonconvex programming problems, a set of starting points is selected and the optimiza-
tion is performed several times beginning at these initial bases. Experience has shown that
using a set of starting points that is physically meaningful is the most successful. The set of
elementary mechanisms augmented with a set of combinations of elementary mechanisms
was used as starting points for the optimization. The number of starting points used was
never greater than 1.5m, where m is the number of elementary mechanisms.
The nonconvexity of the mathematical program can be illustrated. Noting that the
constraints in equation 7 can be solved for the rotations, θ_j, these decision variables can be
eliminated from the optimization, leaving an unconstrained optimization problem with
the participation factors, t_i, as the only decision variables. For a single story, single
bay structure, there are four elementary mechanisms and thus, four decision variables.
Holding two of the t's constant, the objective function is then plotted. The objective
function surface (probability of failure) is shown in Figure 1 plotted against two of the
participation factors. There is a slight dip between the two ridges indicating two local
optima corresponding to beam and combination failure modes. The combination mode is
the global optimum, but it can be seen that if a starting point is selected on the left side of the
figure (say t₂ ≤ 0) the optimization will lead to a locally optimal beam failure mode.
The two story, two bay structure shown in Figure 2 was taken from Ma and Ang [6]
for analysis using the procedure outlined for selection of failure modes. The properties for
each material group and load statistics for this structure are shown in Table 1. Moment
capacities of all sections within a material group are assumed to be perfectly correlated, and
capacities of sections belonging to different material groups are assumed to be uncorrelated.
The random variables are distributed such that the corresponding safety margins are jointly
normal.
The least reliable mode was found by solving the mathematical program of equations 6
and 7, and by minimizing the reliability index, β, rather than maximizing the probability
of failure. Eighteen starting points were used and optimal solutions were found using a
gradient-based nonlinear optimization program, MINOS [7]. To find the remaining modes,
the mathematical program shown in equations 11 and 12 was solved by using Ditlevsen's
lower bound as the objective function. The constrained problem was converted to an
unconstrained optimization by solving the constraints and substituting into the objective
function. A modified Hooke and Jeeves direct search technique was used to find solutions
to the mathematical program.
The solution from these mathematical programs is shown in Table 2. Due to the non-
convexity of the problems, locally optimal solutions were found using the various starting
points at each cycle. Only the best solution from each cycle was retained. The first column
shows the cycle number and the second column describes the mode type found during that
cycle. The third column lists the reliability index for the mode listed in column 2. The
fourth column shows the value of the objective function at each cycle (i.e., the value of
Ditlevsen's lower bound using this mode and the previously determined modes). After the
sixth cycle, no modes could be found which caused any significant increase in the estimate
of the system probability of failure. Based on these six modes, an estimate for the system
probability of failure using the lower bound was 0.131.
To determine the accuracy of using the lower bound, Monte Carlo simulation was
performed using the six identified modes, and an estimate for P_f was found to be 0.132.
Ma and Ang estimated the system probability of failure to be 0.135 by performing Monte
Carlo simulation with 48 modes identified by enumeration.
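Such a Monte Carlo check on a set of identified modes can be sketched as follows; the margin means, standard deviations, and correlations are illustrative assumptions, not the values of the example above.

```python
# Monte Carlo estimate of the series-system failure probability for jointly
# normal safety margins, analogous to the simulation check reported in the
# text.  All statistics are illustrative.
import numpy as np

rng = np.random.default_rng(1)
mz = np.array([150.0, 180.0, 200.0])         # assumed margin means
sz = np.array([62.0, 70.0, 75.0])            # assumed margin std. deviations
rho = np.array([[1.0, 0.3, 0.2],
                [0.3, 1.0, 0.4],
                [0.2, 0.4, 1.0]])            # assumed margin correlations
cov = np.outer(sz, sz) * rho

Z = rng.multivariate_normal(mz, cov, size=200_000)
pf = np.mean((Z <= 0.0).any(axis=1))         # P[Z1<=0 or Z2<=0 or Z3<=0]
print(pf)
```

Because the margins are jointly normal, sampling them directly is cheap, and the estimate converges with standard-error O(1/sqrt(number of samples)).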
Conclusions
References
9. Neal, Bernard G., and P. S. Symonds. "The Rapid Calculation of the Plastic Collapse
Load for a Framed Structure." Journal of the Institution of Civil Engineers, part 3,
Vol. 1, 1952, pp. 58-71.
10. Plackett, R. L. "A Reduction Formula for Normal Multivariate Integrals", Biometrika,
41, 1954, pp. 351-360.
11. Smith, Alan A., E. Hinton, and R. W. Lewis. Civil Engineering Systems Analysis and
Design. New York: John Wiley and Sons, 1983.
12. Watwood, Vernon B. "Mechanism Generation for Limit Analysis of Frames." Journal
of the Structural Division, ASCE, Vol. 105, No. ST1, Jan., 1979, pp. 1-15.
[Figure: beam and combination failure modes; two story, two bay frame with loads P (upper story) and 2P (lower story), forces F1-F4, and member groups M1-M6.]
Takeru Igusa
Department of Civil Engineering, Northwestern University
Evanston, Illinois, 60208 U.S.A.
SUMMARY
The relationships between the parameters of a structural system and its wide-band
response are explored. The goal is to identify and characterize critical system
configurations for which the response may become highly sensitive to small parameter
variations. Such configurations are useful in design applications since minor
modification of the system may lead to a significant reduction in response. The
investigation, which centers on an examination of analytical expressions for the modal
properties and mean-square response, shows that critical systems can be identified with
mode crossing sets defined in the space of variable parameters. Emphasis is on
characteristics associated with the intrinsic dynamic properties of the system rather than
user-defined parameterizations. Two examples demonstrate the insight that the results of this
paper provide into the system dynamic properties and their relation to the response.
1. INTRODUCTION
The relationships between the parameters of a system and its response to dynamic
loads are important in design and reliability assessment of structural systems. Although
such relationships can be quite general, there frequently exist critical configurations for
which small variations in the system parameters lead to large variations in the system
response. Such configurations are useful in design applications since minor modification
of the system may lead to a significant reduction in response. In addition, they have a
significant impact on probabilistic analysis since small uncertainties in the system
parameters may lead to large uncertainties in the response. Consequently, the
identification and characterization of critical configurations is an important problem in
structural dynamics.
It is well known that for harmonically loaded systems, two critical configurations are:
resonance, which is easily identified by the natural frequencies of the system, and
nodalization, where modal harmonic responses cancel each other at certain points, or
nodes of the structure [1]. The identification of such critical configurations is useful in
developing optimal design strategies and has found applications in aerospace and
rotorcraft systems which experience harmonic loads [1-4]. Studies of periodic and nearly
periodic systems have revealed that the propagation of waves is sensitive to the
variations of the coupling mechanisms and other dynamic properties of each repeating
substructure [5,6]. It has been shown both mathematically and experimentally that such
sensitivities are due to localization of the response [7,8].
Recent work by Igusa and Der Kiureghian [9] has shown examples of critical system
configurations for systems subjected to wide-band excitation. However, due to differences
of the nature of the excitation, critical configurations for wide-band excitation differ from
those found for harmonic loads and wave propagation problems. The goal of the present
work is to develop a method to predict, identify, and characterize such configurations.
The identification problem is restricted to dynamic aspects of the problem in order to
exclude cases of ill-defined parameterizations which can be resolved by simple re-
formulation of the parameters. Due to the wide range and complexity of the general
problem, attention is focused on parameters that affect stiffness-related properties of linear
systems. Damping, loading, and material and geometric non-linearities can also be
considered variable parameters, but such considerations are beyond the scope of this paper.
In the first part of this paper systems consisting of two coupled modes are analyzed.
It is found that the only possible critical configuration is where the frequencies of the two
modes approach each other. It is shown that at such configurations, the modal properties
undergo a sudden shift of character, which was first noted in an entirely different context
by Leissa [10] and was termed "curve veering". The corresponding points in the parameter
space are defined to be the mode-crossing set.
The sensitivity of the response is found to be governed by two parameters: a modal
coupling parameter and a ratio of modal response coefficients. It is shown that
amplification or reduction of the wide-band response at a mode crossing may occur and
can be predicted by the values of these parameters. However, for some parameter values,
the effect of a mode crossing may have little influence on the response, even if severe
"curve veering" of the eigenvalues occurs.
The generalization to systems with more than two modes has been given in
Reference [11]. In this paper, a brief summary of those results is presented.
The concepts in this paper are illustrated by two contrasting examples.
respectively, where q and r are constant vectors and the scalar function w(t) is a white-
noise process with constant power spectral density Go. General filtered-white-noise
excitation can be modeled by a simple reformulation of the system matrices in equation (1)
[13].
Let p denote the vector of parameters which affect the stiffness properties of the
system by the functional relationship K = K(p). As stated in the introduction, it is assumed
that the damping and mass matrices are constant with respect to p. It is also assumed that
the system is classically damped (where the free vibration undamped mode shapes
diagonalize the damping matrix) and that the modal damping ratios are small in relation
to unity. The parameterization of the system is not of interest in this paper; therefore, by
proper definitions of system parameters, it can be assumed that the stiffness matrix is not
highly sensitive to variations in the parameters:

    ∂k_ik/∂p_j ~ k_ik/p_j        for all i, k, j        (3)
The mode shapes, frequencies, and modal damping ratios of the system are denoted by
φ_i(p), ω_i(p), and ζ_i, respectively, where the modal masses are normalized to unity.
The root-mean-square of the response y(t) is the simplest and most fundamental
response quantity for wide-band excitations [12] and is used throughout this study. Since
the parameter p affects the stiffness matrix, which in turn influences the response y(t), the
root-mean-square response is a function of p and is denoted R(p). A system is considered
critical if small variations of the parameters p result in large variations of the response R.
One possible measure of sensitivities is the response ratio
    A(p_1, p_2) = R(p_1) / R(p_2)        (4)
where the parameter points p_1, p_2 are chosen from distinct regions of the parameter space
defined in general terms based on the dynamic properties of the system. Large, small, and
unit values of A(p_1,p_2) correspond to amplification, reduction, and no change of the
response at p_1 relative to the response at p_2. The numerical values for p_1, p_2 are
dependent upon the particular parameterization of a particular system and can be treated
separately. In this way, the problem is effectively decomposed into the dynamic aspects of
the system and the parameterization. As stated in the introduction, the emphasis of this
paper is on the former.
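For a single lightly damped mode, the root-mean-square response can be written down explicitly: a classical random-vibration result (see, e.g., Lin [12]) gives the stationary mean-square response of a unit-mass oscillator under white noise of two-sided intensity G_0 as E[y²] = πG_0/(2ζω³). The sketch below checks this by numerical integration of the squared transfer function; the grid size and truncation limit are arbitrary choices, not values from the paper:

```python
import math

def mean_square_response(omega_n, zeta, g0=1.0, n=100000, cutoff=50.0):
    """Integrate E[y^2] = integral of |H(w)|^2 * g0 over all w for a unit-mass
    oscillator, H(w) = 1/(omega_n^2 - w^2 + 2i*zeta*omega_n*w), using the
    trapezoidal rule on [0, cutoff*omega_n] and doubling by symmetry."""
    w_max = cutoff * omega_n
    dw = w_max / n
    total = 0.0
    for i in range(n + 1):
        w = i * dw
        h2 = 1.0 / ((omega_n ** 2 - w ** 2) ** 2 + (2.0 * zeta * omega_n * w) ** 2)
        total += h2 if 0 < i < n else 0.5 * h2  # trapezoid end weights
    return 2.0 * g0 * total * dw

omega_n, zeta = 2.0 * math.pi, 0.02
closed_form = math.pi / (2.0 * zeta * omega_n ** 3)  # classical white-noise result
numeric = mean_square_response(omega_n, zeta)
```

The 1/ζ dependence of this closed form is the reason small parameter changes near resonance translate into large response changes.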
The possible critical configurations for wide-band response form a subset, M, of the
parameter space P, termed the critical set. In terms of the measure in equation (4), M
is defined to be M = {p_1 ∈ P such that A(p_1,p_2) ≪ 1 or
A(p_1,p_2) ≫ 1 for some p_2 in a neighborhood of p_1}. The sensitivity measure A(p_1,p_2)
will be used in this paper to identify and characterize M in detail.
In this section a thorough modal analysis of the two-mode system is performed to gain insight into its
dynamic characteristics.
Consider a system with two modes described by a stiffness matrix K(p) dependent on
parameters p. To establish a frame of reference, the base system is defined by zero
parameter values, p = 0. Let the natural frequencies and mode shapes corresponding
to the base system be denoted ω̄_i and φ̄_i, for i = 1, 2. The modified modal properties
correspond to non-zero parameter values and are determined by the eigenvalue problem
for the modified system given by the 2 x 2 matrix equation
    [K(p) − ω² I] Q = 0        (5)

    K(p) = [ k_11(p)   k_12(p)
             k_12(p)   k_22(p) ]        (6)

where k_ik = φ̄_iᵀ K(p) φ̄_k and the modified mode shapes are given in terms of Q by the
relation

    φ_i = Q_1i φ̄_1 + Q_2i φ̄_2        (7)
The term k_12(p) is the cross-modal stiffness which couples the responses of the two modes.
For notational convenience, the diagonal terms are rewritten as

    k_ii(p) = ω_Si²(p)        (8)

for i = 1, 2, where the ω_Si(p) are termed shifted frequencies whose physical interpretation will
be discussed later in this section.
For the base system, p = 0, k_12(0) = 0 and ω_Si²(0) = k_ii(0) = ω̄_i², and the eigenvalue
problem yields the original frequencies and mode shapes. For the modified system, the
characteristic equation is bi-quadratic and the eigenvalue problem is readily solved.
Sackman and Kelly [14] and Sackman, et al. [15] solved the problem for a special class of
two-degree-of-freedom systems and these results are generalized for all two-mode systems
to yield

    ω_i² = ω_Sa² [1 ± √(β² + γ²)]        (9)

for i = 1, 2, in which the plus sign is associated with i = 1 and the minus sign with i = 2,
ω_Sa² = (ω_S1² + ω_S2²)/2 is the average shifted frequency, and

    γ = 2 k_12(p) / (ω_S1² + ω_S2²)        (10)

    β = (ω_S1² − ω_S2²) / (ω_S1² + ω_S2²)        (11)

The corresponding mode shape coefficients are

    Q_1i = [1 + γ⁻²(−β ± √(β² + γ²))²]^(−1/2),   Q_2i = γ⁻¹[−β ± √(β² + γ²)] Q_1i        (12)
The parameters chosen in these results are particularly suited for interpreting the
characteristics of the modal properties and the wide-band response.
In order to gain greater insight into the modal properties, special cases are examined
in detail. First, consider the case |β| ≫ |γ|,
which corresponds to widely-spaced shifted
frequencies in relation to the coupling parameter. The expressions for the modified modal
properties simplify to
    φ_i = φ̄_i   if ω_S1 > ω_S2,   for i = 1, 2        (13)

    ω_i = ω_Si   if ω_S1 > ω_S2,   for i = 1, 2        (14)
Next, consider β = 0, which corresponds to identical shifted frequencies. In this case,
equations (9) and (12) reduce to

    ω_i² = ω_Sa² (1 ± γ)        (15)

    φ_i = (φ̄_1 ± φ̄_2) / √2        (16)
Equations (13) and (14) show that there is a one-to-one correspondence between the modes
of the modified system with widely-spaced shifted frequencies, and those of the original
system. The mode shapes of the modified system are identical to those of the original
system and the frequencies are given by the shifted frequencies. However, for modified
systems with closely-spaced modes where β and γ are of the same order of magnitude, terms
containing the parameter γ in equations (9) and (12) become significant. Consequently,
each mode of the modified system contains characteristics of both modes of the original
system. At β = 0, equation (13) shows that each mode of the modified system equally
shares characteristics of both modes of the original system.
Equations (13) and (14) show that each mode of the modified system changes in char-
acter from one mode of the shifted system to the other when the value of ω_S1 − ω_S2
reverses sign. This change was first noted by Leissa [10] in the examination of rectangular
membranes. An important question is whether this change is real or artificial, brought
upon by the assignment of the index i to the plus/minus sign in front of the radicals in
equations (9) and (12). If these equations for the modal properties are examined for a con-
tinuous change of the shifted frequencies, it can be seen that mode 1 of the modified sys-
tem takes predominantly mode 1 character of the unmodified system when ω_S1 > ω_S2, but
continuously changes to mode 2 character when ω_S2 crosses and exceeds ω_S1. A similar
statement can be made for mode 2 of the modified system. In conclusion, the change of
modal properties will always occur (except for the trivial case of decoupled systems defined
by γ = 0), and is quantified by equations (9) and (12). Herein, this transition, mathematically
defined by the change of algebraic sign of the difference of shifted frequencies, ω_S1 − ω_S2, is
called a mode crossing (using terminology suggested by Triantafyllou [16]). The
corresponding points in the parameter space P are defined to be the mode crossing set M_mc.
The mode crossing concept is illustrated in Fig. 1, where the modified frequencies ω_i
are plotted with respect to a scalar parameter, p. Three pairs of curves are shown
corresponding to three levels of interaction, in which the lower member of each pair
represents ω_1, and the upper member represents ω_2. The curves closely follow those
given by the shifted frequencies ω_S1(p) and ω_S2(p) except at the mode crossing. The dotted
lines represent the limiting case of no interaction (γ → 0), defined by equations (13) and
(14). The lines appear to cross each other, but are actually sharply angled due to a sudden
transition at perfect tuning. Leissa [10] used the term "curve veering" to describe this
phenomenon. The other sets of lines represent cases of larger interaction, which results in a
separation of frequencies in the transition and a wider transition zone.
Figure 1. Frequency vs. parameter relation for a 2-DOF system.
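The veering behavior can be reproduced in a few lines of code. The sketch below solves the 2×2 eigenvalue problem of equation (5) in closed form for a hypothetical one-parameter family of shifted frequencies (the parameterization and coupling value are invented for illustration); the two eigenvalue branches never touch, and their closest approach, at the crossing, equals twice the cross-modal stiffness:

```python
import math

def modified_frequencies_sq(ws1_sq, ws2_sq, k12):
    """Eigenvalues of the 2x2 modal stiffness matrix of equation (5),
    [[ws1^2, k12], [k12, ws2^2]]; returned sorted, lowest first."""
    avg = 0.5 * (ws1_sq + ws2_sq)
    delta = math.hypot(0.5 * (ws1_sq - ws2_sq), k12)
    return avg - delta, avg + delta

gamma = 0.02  # weak modal coupling -> sharp veering
gaps = []
for p in [0.0, 0.25, 0.5, 0.75, 1.0]:
    ws1_sq, ws2_sq = 1.0 + p, 2.0 - p       # shifted frequencies cross at p = 0.5
    k12 = gamma * 0.5 * (ws1_sq + ws2_sq)   # gamma = 2*k12/(ws1^2 + ws2^2)
    lo, hi = modified_frequencies_sq(ws1_sq, ws2_sq, k12)
    gaps.append(hi - lo)
```

Shrinking gamma narrows the transition zone without ever letting the branches intersect, which is the "curve veering" of Fig. 1.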
The rate of transition (degree of curve veering) at a mode crossing can be rigorously
determined by evaluating the mode derivatives using the closed-form analytical results
derived in this section. It can be shown that the maximum value of the mode shape
derivative is at the mode crossing, and that the derivative is inversely proportional to
the interaction parameter, γ.
The derivative of the first frequency, ∂ω_1/∂p_j, does not have a large maximum value
at a mode crossing, but undergoes a transition from

    ∂ω_1/∂p_j ≈ ∂ω_S1/∂p_j   for ω_S1 < ω_S2
    ∂ω_1/∂p_j ≈ ∂ω_S2/∂p_j   for ω_S1 > ω_S2        (17)
as illustrated in Fig. 1. The rate of transition is measured by the second derivative of the
frequency. It can be shown that the second derivative of the frequency is twice the
coefficient of the first derivative of the mode shape. Therefore, the conditions for large
mode derivatives apply to both the mode shapes and frequencies. In other words, the rate
of change of the mode shape at a mode crossing is directly proportional to the frequency
curve veering curvature.
In summary, mode crossings have been defined by the crossing of shifted frequencies
calculated from the diagonals of the modal stiffness matrix. Closed-form analytical
expressions were used to show that modal properties of two-mode systems can become
sensitive to changes in the system parameters only at mode crossings. The degree of
sensitivity is measured by an interaction parameter, where the sensitivity increases for
smaller interaction. This conclusion can be expressed mathematically as follows: Let M_γ(ε)
denote the set of parameter points corresponding to systems with small interaction
parameter values,

    M_γ(ε) = {p ∈ P such that γ ≤ ε}        (18)

where ε ≪ 1. Then, the modal properties are sensitive at mode crossings provided that
p ∈ M_γ(ε). It is noted that this condition is independent of the system damping.
Another conclusion is that the spacing of the modal frequencies is governed by the
parameters β and γ: Modes i and j are closely-spaced if

    β ~ γ ~ ζ_a,ij        (19)

and widely-spaced if

    β ≫ ζ_a,ij   or   γ ≫ ζ_a,ij        (20)

These criteria represent extreme cases, and in this paper, it is necessary to add an additional
intermediate category of moderately-spaced modes, defined by

    1 ≫ √(β² + γ²) ≫ ζ_a,ij        (21)
Returning to the notion of critical sets, the concepts of mode crossings, closely-spaced
modes, and small interaction can be combined to yield the following concise relation
    M ⊂ M_mc ∩ M_γ(ζ_a,ij)        (22)
The information and analytical expressions developed in this section are used in the
investigation of the response of two-mode systems to wide-band excitation.
The response ratio A(p_1,p_2) is used to quantitatively determine whether a parameter
value p_1 corresponds to a critical configuration. According to equation (22),
p_1 ∈ M_mc ∩ M_γ(ζ_a,ij), i.e., p_1 is chosen among systems at a mode crossing where β = 0 and
with low modal interaction. The point p_2 is chosen in the neighboring region
corresponding to moderately-spaced modes, which satisfies equation (21).
To reduce the number of variables in the response expressions, define the
nondimensional variables

    r̃_ij = r_k / r_l,   q̃_ij = q_k / q_l        (23)

where q_i and r_i are modal response and input coefficients, respectively, defined by equation (24)
for i = 1, 2, and k, l is a permutation of i, j chosen such that |q_k r_k| ≤ |q_l r_l|. The final expression
for the response ratio A(p_1,p_2), given in [11] as equation (25), involves the modal correlation
coefficient ρ_12 of equation (26).
Figure 2. Response ratio at a mode crossing as a function of r̃_12 q̃_12.
Case I: q̃_12 ~ r̃_12. For very small interaction γ ≪ ζ, the correlation coefficient in
equation (26) is approximately ρ_12 ≈ 1, the first term in brackets in equation (25) is domi-
nant, and
    A(p_1,p_2) ≈ (1 + r̃_12 q̃_12) / √(1 + r̃_12² q̃_12²)        (27)
which is a function of the product r̃_12 q̃_12. By the definition in equation (23), the range of
possible values for r̃_12 q̃_12 is −1 ≤ r̃_12 q̃_12 ≤ 1; therefore, A(p_1,p_2) can be fully characterized by
evaluating equation (27) for all permissible values of r̃_12 q̃_12. The result, plotted in Fig. 2,
shows three possible effects. When r̃_12 q̃_12 ≈ 0, then A(p_1,p_2) ≈ 1, meaning the response is
unaffected by the changes of modal properties at a mode crossing. However, when
r̃_12 q̃_12 → −1, then A(p_1,p_2) → 0, corresponding to a cancellation effect between modes, and
when r̃_12 q̃_12 → 1, then A(p_1,p_2) → √2, corresponding to a summation effect.
For larger interaction where the approximation ρ_12 ≈ 1 is invalid, the full general
expression for the response ratio in equation (25) is applicable.
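The three Case I effects can be checked directly: equation (27) depends only on the product x = r̃_12 q̃_12, so a minimal sketch (taking the ratio as (1 + x)/√(1 + x²)) reproduces the cancellation, no-effect, and summation limits quoted above:

```python
import math

def response_ratio_case1(x):
    """Equation (27): response ratio at a mode crossing as a function of
    x = r12*q12, the product of normalized response and input coefficients."""
    return (1.0 + x) / math.sqrt(1.0 + x * x)

cancellation = response_ratio_case1(-1.0)  # modal contributions cancel
unchanged = response_ratio_case1(0.0)      # crossing leaves the response unaffected
summation = response_ratio_case1(1.0)      # modal contributions add
```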
Case II: |r̃_12| ≫ |q̃_12|. By comparing terms in equation (25) it can be shown that

    A(p_1,p_2) ≈ [1 + ρ_12 + r̃_12² (1 − ρ_12)] / 2        for O(r̃_12) = O(1) or |r̃_12| ≪ 1

    A(p_1,p_2) ≈ r̃_12 √[(1 − ρ_12) / (2 (1 + q̃_12² r̃_12²))]        for |r̃_12| ≫ 1        (28)
The first result shows no change of the response at the mode crossing. The second result
implies a very large increase of the response at a mode crossing, and is termed the
magnification effect.
Case III: |q̃_12| ≫ |r̃_12|. Due to the symmetry of equation (25), this case is similar to Case II
with q̃_12 replacing r̃_12 in equation (28).
This completes the analysis of the two-mode system. It has been shown that the
parameter values for critical system configurations lie within the mode crossing set M_mc
and must satisfy small modal coupling conditions. The modal properties for such systems
are characterized by large mode derivatives and closely-spaced natural frequencies with
curve-veering behavior. The influence of mode crossings on the response was found to be
dependent upon the normalized input and response modal coefficients and can have one
of a variety of effects on the response: cancellation, no effect, summation, and
magnification. These conclusions are based on the dynamic properties of the system rather
than its parameterization, and are backed by mathematical expressions for the modal
properties and wide-band response.
6. EXAMPLE SYSTEMS
6.1 COUPLED BEAMS
Two contrasting example systems are examined to illustrate the main ideas of this
paper. The first system, shown in Fig. 3, consists of two simply-supported beams connected
by a moment spring with stiffness ke. Beam (A) has unit length, unit mass per length, and
modulus and moment of inertia such that its fundamental frequency is 1 cycle per second
(Hz). Beam (B) has the same mass per length, modulus, and moment of inertia as beam
(A), but has variable length L_b. A single value of modal damping, ζ, is specified for all
modes. The load is a force uniformly applied along one-fourth of the span of beam (A)
with white-noise amplitude, as shown in Fig. 3. The response is the mean square of the
velocity of beam (B), averaged over the length of the beam.
Figure 3. Moment-spring coupled beams.
The parameters of the problem are p = (κ, L_b), where κ = k_e/(16π²) is the moment
spring stiffness normalized by the rotational stiffness of beam (A) at a supported end. The
root-mean-square responses normalized by the damping ratio, ζR(L_b, κ), are shown in Fig.
4 for three modal damping values: ζ = 0.003 (dotted line), 0.01 (solid line), and 0.03 (bold
line), and for three coupling elements: κ = 0.05, 0.5, and 5.0, which correspond to light,
moderate, and strong coupling, respectively.
There are several interesting characteristics of the response curves. First, and most
significant, are the response peaks at L_b = 1.0, 1.5, and 2.0. For light coupling the peaks are
an order of magnitude in height, for moderate coupling the peaks become attenuated, and for
strong coupling the response is relatively constant. Second, variations in the damping have a
significant effect only for the lightly-coupled system. Third, for light damping, the
responses are nearly independent of the spring coupling at L_b = 1.0 and 2.0. The last
Figure 4. Normalized root-mean-square response vs. L_b for three damping and three coupling values.
which correspond to the uncoupled natural frequencies of beams (A) and (B), respectively.
It can be seen that mode crossings can only occur between natural frequencies of different
beams and that the system is, at most, pair-wise tuned. The modal coupling parameter is
    γ_ij = (−1)^(j−n) 16 i j π² κ / [L_b^(3/2) (ω_Si² + ω_Sj²)]        (32)
which is directly proportional to the normalized rotational stiffness κ. This parameter
quantifies the relation between the dynamic coupling of the modes and the physical
coupling by the rotational spring.
When L_b = 1, then ω_Si = ω_S,i+n for i = 1, ..., n, signifying n pairs of mode crossings
between the modes of beams (A) and (B). The remaining mode crossings are found by
solving ω_Si = ω_Sj, which for small κ has the solutions

    L_b = j/i,   mode i of beam (A) tuned to mode j of beam (B)        (33)
The mode crossings correspond precisely to the location of the response peaks in Fig.
4. To analytically determine the effect of the mode crossings on the response, the response
ratio A(p_1,p_2) for the response of the tuned mode is evaluated using the expression in
equation (25). The result is
    A(p_1,p_2) ≈ ρ_ij / √(γ_ij² + 4ζ²)        (34)
A number of observations can be made from this simple expression.
1. As the coupling stiffness κ is increased, γ_ij increases proportionately (according to
equation (32)) and the amplification factor decreases. This explains the attenuation of
the response peaks in Fig. 4 at the three mode crossings for larger coupling stiffnesses.
2. As the coupling stiffness approaches κ → 0, then γ_ij → 0, and the amplification factor
approaches A(p_1,p_2) → ρ_ij/(2ζ). This implies that the amplification is inversely
proportional to the damping, which is shown for the lightly-coupled system (κ = 0.05)
in Fig. 4. It also implies that for sufficiently small coupling stiffnesses, the responses
should be independent of κ.
3. As the damping approaches ζ → 0, the amplification factor increases to
A(p_1,p_2) → ρ_ij/γ_ij, which is independent of the damping. This is also shown in Fig. 4
for both the moderate and light coupling cases where the response peaks increase
and asymptotically approach a limiting shape for decreasing damping.
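The observations above follow directly from the form of equation (34); a minimal numerical check, with the correlation coefficient set to unity purely for illustration:

```python
import math

def amplification(rho, gamma, zeta):
    """Equation (34): amplification of the tuned mode at a mode crossing,
    A = rho_ij / sqrt(gamma_ij^2 + 4*zeta^2)."""
    return rho / math.sqrt(gamma * gamma + 4.0 * zeta * zeta)

rho = 1.0  # correlation coefficient, set to unity for illustration
# Observation 2: vanishing coupling -> A -> rho/(2*zeta)
weak_coupling = amplification(rho, 1e-9, 0.01)
# Observation 3: vanishing damping -> A -> rho/gamma, independent of damping
light_damping = amplification(rho, 0.05, 1e-9)
```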
It was noted earlier that the normalized responses at the mode crossings are
independent of the coupling parameter at sufficiently small damping. This can be
explained by virtue of the facts that the correlation coefficient becomes ρ_ij ≈ 0, and the
modal properties at a mode crossing are nearly identical for different coupling values.
Therefore, the mean-square responses are nearly independent of the coupling parameter.
These analytical conclusions are illustrated in this numerical example by the response
peaks at L_b = 1.0 and 2.0 in Fig. 4 which have nearly equal amplitudes for all coupling
values at ζ = 0.003. Similar behavior is also observed at L_b = 1.5 for smaller damping
values, but the additional curves are not plotted in Fig. 4 to preserve clarity.
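The peak locations can be recovered from elementary beam theory. Assuming simply-supported Euler-Bernoulli beams, natural frequencies scale as j²/L²; with beam (A)'s fundamental at 1 Hz, f_i of beam (A) is i² and f_j of beam (B) is j²/L_b², so shifted-frequency crossings occur at L_b = j/i. A sketch (the helper name is invented) enumerating these ratios in 1 ≤ L_b ≤ 2:

```python
from fractions import Fraction

def crossing_lengths(n_modes, lb_min=1, lb_max=2):
    """Beam-B lengths at which an uncoupled beam-A frequency i**2 equals an
    uncoupled beam-B frequency j**2 / Lb**2, i.e. Lb = j/i (exact rationals)."""
    lengths = set()
    for i in range(1, n_modes + 1):
        for j in range(1, n_modes + 1):
            lb = Fraction(j, i)
            if lb_min <= lb <= lb_max:
                lengths.add(lb)
    return sorted(float(lb) for lb in lengths)

peaks = crossing_lengths(3)  # first three modes of each beam
```

The resulting lengths coincide with the response peaks of Fig. 4 at L_b = 1.0, 1.5, and 2.0.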
6.2 SPRING-SUPPORTED BEAM
Consider a simply-supported beam with two spring supports at one-third span inter-
vals as shown in Fig. 5. The beam has unit length, unit mass per length, and modulus and
Figure 5. Simply-supported beam with two spring supports.
The first three modes have the most significant contribution to the response and
their properties are investigated in detail. According to the results of this paper, the critical
configurations of the system are the mode crossings which, for the present system, occur
between the third mode and each of the first two modes. It can be shown that the
corresponding values of the support stiffnesses, which define the mode crossing set M_mc,
satisfy

    (k_1 − 131π²)(k_2 − 131π²) = 40π⁴        (35)
Since there are two unknowns and one equation, the mode crossing set consists of curves
in the two-dimensional parameter space. The curves are hyperbolic with two branches at
k_i < 131π² and k_i > 131π², which correspond to the mode crossings between the first and
third modes and the second and third modes, respectively, as indicated in Fig. 6.
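Equation (35) is easy to exercise numerically; the sketch below solves it for k_2 given k_1 on the branch k_1 > 131π² (the sample point is arbitrary) and confirms the defining product:

```python
import math

PI2 = math.pi ** 2

def crossing_k2(k1):
    """Solve equation (35), (k1 - 131*pi^2)*(k2 - 131*pi^2) = 40*pi^4, for k2
    (valid for k1 != 131*pi^2)."""
    return 131.0 * PI2 + 40.0 * PI2 ** 2 / (k1 - 131.0 * PI2)

k1 = 200.0 * PI2                  # arbitrary point on the branch k1 > 131*pi^2
k2 = crossing_k2(k1)
residual = (k1 - 131.0 * PI2) * (k2 - 131.0 * PI2) - 40.0 * PI2 ** 2
```

Because the right side is positive, both factors share a sign, which is why the two hyperbolic branches sit entirely below or entirely above the asymptote at 131π².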
Analytical expressions for the modal properties have been used [11] to investigate the
responses at parameter points p_1 = (k_1, k_2) at a mode crossing and p_2 = (k'_1, k'_2) in the
neighborhood of p_1 corresponding to moderately-spaced modes. The result is

    A_i(p_1,p_2) = (−54521 + 85.745 k_1 − 0.033712 k_1²) /
                   √(1.8218×10⁹ − 5.7128×10⁶ k_1 + 6720.8 k_1² − 3.5155 k_1³ + 0.00068988 k_1⁴)        (36)

for i = 1, 2.
The amplification ratio is plotted in Fig. 7 and exhibits a peak value of approximately 1.7 in
the neighborhood of k_1 = 131π² and values small in relation to unity outside of this
neighborhood. In terms of the mode crossing set in Fig. 6, an increase in the
Figure 6. Mode crossings of the spring-supported beam in parameter space.
Figure 7. Amplification ratio of the spring-supported beam vs. k_1.
response is expected near k_1 = 131π², which corresponds to the vertical asymptotes, and a
decrease is expected near the horizontal asymptotes.
The results and conclusions concerning the location of the mode crossings (equation
(35)) and the behavior of the response (equation (36)) have been derived entirely from the
analytical results of this paper. These results and conclusions are verified by numerically
evaluating the root-mean-square response using the exact modal properties and full modal
combination using standard numerical analysis procedures.
Figure 8. Contour plot of the response surface of the spring-supported beam.
Complete qualitative views of the response surface are shown in a contour plot in
Fig. 8 and in a three-dimensional perspective in Fig. 9 for 25π² ≤ k_1, k_2 ≤ 250π².
predicted in the preceding discussion, significant changes of response levels occur at the
mode crossing set of parameter values. Both the reduced and amplified response levels
predicted by equation (36) are exhibited at the mode crossings, which can be seen by
comparing the mode crossing set in Fig. 6 with the response values in Figs. 8 and 9. It is
noted that for parameter values away from the mode crossing set, the response variations
become stable and nearly linear, which supports the hypothesis that the critical
configurations for wide-band response of the system are found only in the mode crossing
set.
SUMMARY
Several new results have been established in this paper. First, critical configurations
of systems subjected to wide-band excitation have been identified. It has been shown that
these configurations correspond to mode crossings, which are defined in terms of the
natural frequencies of the system. Next, the dynamic characteristics of the system at mode
crossings were investigated using modal analysis. The key result of this section of the
paper is the analytical description of the curve-veering behavior at mode crossings.
Third, the wide-band response characteristics of two-mode systems at mode crossings were
examined and were found to be dependent upon modal interaction and the input and
response modal coefficients. Both amplification and reduction of responses were found to be
possible, and analytical results were obtained to predict the degree of response variation.
Finally, parallels between more complex multi-mode systems and two-mode systems were
established, which allow the dynamic analyst to extract the key characteristics of the
systems' wide-band response.
All of the results are based on dynamic response characteristics, rather than user-
defined parameterizations, and reflect intrinsic dynamic characteristics of the problem.
The analytical expressions of this paper provide insight into the relationships between the
dynamic properties of the system, the modal characteristics, and the mean-square response.
This insight was demonstrated by two examples, in which all of the dominant
characteristics of the responses were identified with specific dynamic properties of the
systems.
The results of this paper can be applied to a number of related problems. First, the
response/parameter relationships can be applied to the problem of optimal design of
dynamic structural systems. Such relationships have already been used for harmonic
loads [1,3], and the new results herein are applicable to design for stochastic loads.
Second, the basic approach can be used to identify other critical system configurations such
as those related to damping or nonlinear effects. Third, the knowledge of the critical
parameters and response/parameter relationships can be used to enhance stochastic finite
element analysis of dynamically loaded systems such as those described in references
[17,18]. Fourth, critical sets can be used in parameter reduction problems in which large
numbers of uncertain or design parameters pose computational difficulties. The critical set
can be used to identify a much smaller set of parameters or a small set of newly redefined
parameters which have the most influence in the response. Finally, the analytical results
may provide additional insight into curve veering investigations of nearly periodic
systems [7,8,19].
ACKNOWLEDGEMENTS
This research was supported by the National Science Foundation under Grant No.
CES-8707792, Dr. S.-C. Liu, Program Director and the Office of Naval Research under
Contract No. 88K-0514, Dr. A. J. Tucker, Scientific Officer. This support is gratefully
acknowledged.
REFERENCES
1. R. G. LOEWY 1984 Journal of the American Helicopter Society, 29, 4-30. Helicopter
vibrations: a technological perspective.
2. J. F. BALDWIN and S. G. HUTTON 1985 American Institute of Aeronautics and
Astronautics Journal, 23, 1737-1743. Natural modes of modified structures.
3. W. C. MILLS-CURRAN and L. A. SCHMIT 1985 American Institute of Aeronautics and
Astronautics Journal, 23, 132-138. Structural optimization with dynamic behavior
constraints.
4. B. P. WANG, A. B. PALAZZOLO, and W. D. PILKEY 1982 State of the Art Survey of Finite
Element Methods, ASME, New York, Chapter 8. Reanalysis, modal synthesis, and dy-
namic design.
5. C. H. HODGES 1982 Journal of Sound and Vibration, 82, 411-424. Confinement of
vibration by structural irregularity.
6. O. O. BENDIKSEN 1987 American Institute of Aeronautics and Astronautics Journal,
25(9), 1241-1248. Mode localization phenomena in large space structures.
7. C. PIERRE and E. H. DOWELL 1987 Journal of Sound and Vibration, 114, 549-564.
Localization of vibrations by structural irregularity.
8. C. H. HODGES and J. WOODHOUSE 1983 Journal of the Acoustical Society of America,
74, 894-905. Vibration isolation from irregularity in a nearly periodic structure: theory
and measurements.
9. T. IGUSA and A. DER KIUREGHIAN 1989 Journal of Engineering Mechanics, 114, 812-
832. Response of uncertain systems to stochastic excitation.
10. A. W. LEISSA 1974 Journal of Applied Mathematics and Physics (ZAMP), 25, 99-111.
On a curve veering aberration.
11. T. IGUSA Journal of Sound and Vibration (submitted paper). Critical configurations of
systems subjected to wide-band excitation.
12. Y. K. LIN 1976 Probabilistic Theory of Structural Dynamics, Krieger Publishing,
Huntington, New York.
P. Thoft-Christensen
University of Aalborg
Sohngaardsholmsvej 57, DK-9000 Aalborg, Denmark
ABSTRACT
In this paper a brief presentation of the state-of-the-art of reliability-based structural optimization
(RBSO) is given. Special emphasis is put on problems related to application of RBSO on real (large)
structures. Shape optimization, knowledge-based optimization and optimal inspection strategies
are briefly discussed. A list of 125 references is included in the appendix.
1. INTRODUCTION
RBSO has been an area of research which has grown strongly in the last two decades. In figure 1
(based on the references in the appendix) the number of papers published since 1960 is shown for
5-year periods. From a very slow start in 1960 a drastic increase is seen in the years 1985-1989.
A similar development has taken place for classical (deterministic) structural optimization. This
paper is highly inspired by the references in the appendix. However, for the sake of simplicity,
reference is made only to a few papers, namely when results are taken directly from these papers.
The authors are asked for understanding of this point of view.
Figure 1. Number of RBSO papers published in 5-year periods since 1960 (based on the references
in the appendix).
Why this growing interest in structural optimization? First of all, structural optimization is an
efficient methodology for design. It is a general and versatile tool for automatic design and it is
relatively easy to use for practicing engineers. It should also be mentioned that although a number
of approximations have to be made, the essential features of the original optimization problems are
maintained. Therefore, structural optimization techniques have a number of advantages compared
with traditional design techniques, but clearly the quality of an optimal design is only as good as
the underlying analysis.
Let z = (z_1, ..., z_N) be N optimization variables, e.g. dimensions of structural elements. Then
an element reliability-based optimization problem can be formulated in the following way

    min   W(z)
    s.t.  β_i(z) ≥ β_i^min ,   i = 1, 2, ..., m                (1)
          z_i^l ≤ z_i ≤ z_i^u ,   i = 1, 2, ..., N

where W is the objective function, β_i is the reliability index for (failure) element i and β_i^min the
corresponding minimum acceptable value. z_i^l and z_i^u are simple lower and upper bounds for the
design variable z_i, i = 1, ..., N.
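Problem (1) can be handed to any general-purpose NLP solver. The sketch below is not from the paper: it assumes a hypothetical two-element structure in which each element has a Gaussian safety margin, so that β_i(z) is available in closed form, and all numbers are made up.

```python
# Hedged sketch of problem (1) for a made-up two-element structure.
# Each element i has resistance z_i * R (R Gaussian) and Gaussian load S_i,
# so beta_i(z) = E[M_i] / sd[M_i] with safety margin M_i = z_i R - S_i.
import numpy as np
from scipy.optimize import minimize

lengths = np.array([2.0, 3.0])        # member lengths l_i (hypothetical)
mu_R, sig_R = 300.0, 30.0             # resistance per unit area (hypothetical)
mu_S = np.array([400.0, 500.0])       # load means (hypothetical)
sig_S = np.array([60.0, 80.0])        # load standard deviations
beta_min = 3.0                        # minimum acceptable beta_i

def weight(z):                        # objective W(z): material volume
    return float(lengths @ np.asarray(z))

def betas(z):                         # element reliability indices beta_i(z)
    z = np.asarray(z)
    return (z * mu_R - mu_S) / np.sqrt((z * sig_R) ** 2 + sig_S ** 2)

res = minimize(weight, x0=[3.0, 3.0], method="SLSQP",
               bounds=[(0.5, 10.0)] * 2,
               constraints=[{"type": "ineq",
                             "fun": lambda z: betas(z) - beta_min}])
print(res.x, weight(res.x))
```

At the optimum both reliability constraints are active, i.e. β_i(z*) = β_i^min, which mirrors the typical RBSO solution.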
The corresponding formulation at systems reliability level K, K = 1, 2, ..., can be written in the
following way

    min   W(z)
    s.t.  β^K(z) ≥ β_K^min                                     (2)
          z_i^l ≤ z_i ≤ z_i^u ,   i = 1, 2, ..., N

where β^K is the systems reliability index at level K and β_K^min the corresponding minimum acceptable
value.
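A common definition of the systems reliability index is β_sys = -Φ^{-1}(P_F,sys), with P_F,sys the system failure probability. A minimal sketch for a series system of failure elements, assuming (purely for illustration, the paper does not specify this) that the elements fail independently:

```python
# Hedged sketch: systems reliability index of a series system of
# independent failure elements, beta_sys = -Phi^{-1}(P_F,sys).
import numpy as np
from scipy.stats import norm

def series_system_beta(element_betas):
    # element failure probabilities P_f,i = Phi(-beta_i)
    p_f = norm.cdf(-np.asarray(element_betas, dtype=float))
    # a series system survives only if every element survives
    p_sys = 1.0 - np.prod(1.0 - p_f)
    return float(-norm.ppf(p_sys))

print(series_system_beta([3.5, 4.0, 4.2]))   # slightly below the smallest beta_i
```

The system index is always below the smallest element index; correlation between elements (ignored here) would raise it back towards min β_i.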
Notice that the main difference between (1)-(2) and classical structural optimization is that in
classical structural optimization the constraints are related to e.g. stresses and displacements,
whereas in RBSO they are related to element or system reliability. In (1)-(2) only a single objective
function is used. In a more general formulation multi-objective functions are introduced.
Optimal design problems can be classified at four optimization levels depending on the nature of
the design variables z:
Levell: Cross-sectional optimization
• Sizing design variables
Level 2: Shape optimization
• Sizing design variables
• Shape design variables
Level 3: Configuration optimization
• Sizing design variables
• Shape design variables
• Configuration variables
Level 4: Total optimization
• Sizing design variables
• Shape design variables
• Configuration variables
• Material selection variables
In RBSO most of the work is at level 1. Some work has been done at level 2 and a little at level 3.
To the author's knowledge, no work has been done at level 4.
2. OPTIMIZATION OF LARGE COMPLEX STRUCTURES
An overview of optimization of large complex structures is given by Jensen & Thoft-Christensen [1].
When standard methods for optimization of normal-size problems (moderate number of variables
and constraints) are used on large problems (high number of variables and constraints), numerous
complications will usually occur. First of all, the fact that a large problem is being considered will
result in the analysis of a vast amount of data. This primarily causes problems with insufficient
internal memory, which implies frequent swapping of information between internal and external
memory. As a result of this, methods based on the Hessian may encounter serious problems.
Standard optimization methods may also fail because of numerical problems, lack of convergence,
convergence to a "wrong" solution, cycling or program errors. In the worst case the calculation
time when using traditional methods will grow exponentially with the number of design variables.
This in itself puts considerable limits on the size of the problem that can be analysed, no matter
whether a (deterministic) problem or an RBSO problem is considered.
In the literature several methods for optimization of large problems are described. These methods
can be divided into five groups:
I direct methods with linear constraints,
II indirect methods with linear constraints,
III direct methods with general constraints,
IV indirect methods with general constraints,
V methods suited for parallel computers.
In RBSO the groups III-V are most relevant, since the reliability constraints are strongly non-linear.
Direct methods handling general constraints are the most widely used in optimization of
structures. Methods based on penalty functions, extended penalty functions, sequential linear
programming, sequential quadratic programming, reduced gradients, projected gradients, augmented
Lagrangians, various Newton-type methods and feasible directions are only some of the types used to
optimize problems in structural engineering. All of these methods work well on small to moderate-sized
problems and some can be extended to optimization of special types of large structures.
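To make one of the listed approaches concrete, the sketch below applies a plain exterior penalty function to a toy problem (objective and constraint are made up, not taken from the paper): the constrained problem min f(z) s.t. g(z) ≥ 0 is replaced by an unconstrained sequence with a growing penalty parameter.

```python
# Hedged sketch of an exterior penalty method on a toy problem:
#   min  z0^2 + z1^2   s.t.  z0 + z1 >= 2   (optimum at (1, 1))
import numpy as np
from scipy.optimize import minimize

f = lambda z: z[0] ** 2 + z[1] ** 2        # toy objective
g = lambda z: z[0] + z[1] - 2.0            # toy constraint, g(z) >= 0

z = np.array([3.0, 3.0])
for rho in (1.0, 10.0, 100.0, 1000.0):     # increasing penalty parameter
    penalized = lambda z, rho=rho: f(z) + rho * max(0.0, -g(z)) ** 2
    z = minimize(penalized, z).x           # warm-start from previous solution
print(z)                                   # close to the constrained optimum (1, 1)
```

Each unconstrained subproblem is smooth, and the sequence of minimizers converges towards the constrained optimum as the penalty parameter grows, which is the standard behaviour of exterior penalty methods.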
The indirect methods with general constraints involve decomposition of the problem into smaller
parts (substructures) at 2 or more levels (multilevel). Each of these subsystems must have its
own goals (objective functions) and constraints. The standard form of interconnection between
substructures is the top-down hierarchical form. This means that a given subsystem controls the
systems at the level below and is itself controlled by the system at the level above. The main
considerations when using this approach concern the information flow between substructures
and the coordination of the subproblems to ensure fulfilment of the overall goal. However,
decomposition into smaller subproblems that can be independently optimized is rarely possible in
practical engineering problems. Optimizing one subproblem without taking into consideration its
interaction with other subproblems may lead to no solution or to a non-optimal solution.
The most well-known indirect methods are
• the model coordination method (Wismer [2], Kirsch [3])
• the goal-coordination method (Wismer [2], Kirsch [3])
• the linear decomposition method (Sobieszczanski-Sobieski et al. [4], [5], [6]).
To the author's knowledge, none of these methods has been used in RBSO, only in classical
structural optimization. The last-mentioned method seems to be suitable for RBSO problems, and
some research is being performed in this area and will be published soon by Jensen & Thoft-Christensen
[7]. This method is described in detail in [4], [5], [6] and [1]. The method can be compared to the
design and organisation of a large structure. A coordinator divides the design of the structure into
smaller problems and assigns the tasks to smaller groups. Each group solves its design problems
with its own tools and sends the result back to the coordinator. He analyses the results, coordinates
them, may change some of the parameters, and sends the problems back for re-analysis.
This iterative scheme continues until an optimum is achieved. This decomposition clearly has a
number of advantages in the subdivision of the structure. However, the method can diverge. It is
not clear how this method will work with reliability constraints but, as mentioned earlier, research
is being performed in this area.
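The coordinator scheme sketched above can be caricatured in a few lines. This is only a schematic toy (two made-up subproblems coupled through a single interface parameter s), not the linear decomposition method of [4], [5], [6]:

```python
# Hedged toy sketch of top-down coordination: the coordinator fixes an
# interface parameter s, each group optimizes its local variable for that s
# and reports its optimal cost, and the coordinator then adjusts s.
from scipy.optimize import minimize_scalar

def group_a(s):       # subproblem A: local variable x, coupled to s (made up)
    return minimize_scalar(lambda x: (x - s) ** 2 + x ** 2).fun

def group_b(s):       # subproblem B: local variable y, coupled to s (made up)
    return minimize_scalar(lambda y: (y + s) ** 2 + 0.5 * y ** 2).fun

# coordinator: choose s to minimize the sum of the reported optimal costs
coord = minimize_scalar(lambda s: group_a(s) + group_b(s),
                        bounds=(-5.0, 5.0), method="bounded")
print(coord.x)        # the coordinated optimum is s = 0 for this toy
```

The key property is that the coordinator never sees the groups' internal variables, only their reported optimal costs for each trial value of the interface parameter; this is the information-flow pattern the text describes.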
3. SHAPE OPTIMIZATION
To illustrate RBSO at optimization level 2 (shape optimization) consider the simple model of
a mono-tower platform shown in figure 2. This example is taken from Enevoldsen, Sørensen &
Thoft-Christensen [8].
The RBSO problem (2) at systems level K = 1 is solved using β^min = 3.00. The objective function
W(z) is the steel volume between seabed and topside (initially W = 36.8 m³). 11 stochastic
variables are used in the reliability modelling of the structure and two types of failure modes
(yielding failure and fatigue failure) are used, see Enevoldsen et al. [8] for details.
The shape optimization problem is solved with three different reliability models with failure
elements in series.
The optimal design corresponding to the three different reliability models is seen to be quite
different. The optimal design found with model 1 (extreme failure) has the lowest objective function
(steel volume). The optimal design corresponding to models 2 and 3 (fatigue failure and extreme
plus fatigue failure) has almost the same steel volumes indicating that fatigue failure is the most
significant failure mode.
The optimal shapes for models 1 and 2 are very different. The optimal design corresponding to
model 1 has the smallest diameter at sea level, whereas the design corresponding to model 2 has
the largest diameter at sea level. The reason is the different physical mechanisms of the two
failure modes. The optimal shape corresponding to model 3 has (as expected) the smallest
diameter at sea level. Although the fatigue failure mode is the most significant, so that a shape
like that of model 2 should be expected, the diameter increases below sea level due to the influence
of the extreme wave failure mode.
Compared with the initial structural design the steel volume is reduced by 12 % and the systems
reliability index is increased from 1.43 to 3.00.
Figure 2. Simple model of the mono-tower platform (design variables: diameters d_i and wall
thicknesses t_i; loads P_i).
4. KNOWLEDGE-BASED OPTIMIZATION
Automatic RBSO will probably not, in general, be possible if optimization takes place at
optimization level 3 (configuration optimization). It seems much more reasonable to use some
kind of interactive system, where expert knowledge is used to improve the design. In this chapter
a very simple example is shown where expert knowledge is used in an extremely simple way. This
example is taken from Thoft-Christensen [9], where more details can be found, and is based on an
M.Sc. thesis by Frisk & Poulsen [10]. 10 design variables are considered, namely 6 shape variables
z_1, ..., z_6 and 4 sizing variables z_7, ..., z_10 (see figure 4). The structure has 19 tubular members
and each of them has 3 failure elements (failure modes), namely yield failure elements at the ends
of the beam and a stability failure element.
Figure 4. Frame structure with 6 shape design variables z_1, ..., z_6 and 4 sizing design variables
z_7, ..., z_10.
The optimization problem is formulated on element level and with the weight as objective function:
    min  W(z) = 7.85 Σ_{i=1}^{19} l_i(z) a_i(z)
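Numerically the objective is just a weighted sum over the 19 members (7.85 is presumably the density of steel in t/m³, giving W in tons when l_i is in m and a_i in m²). A trivial sketch with made-up lengths and areas for three members:

```python
# Toy evaluation of W(z) = 7.85 * sum_i l_i(z) * a_i(z)  (units: t/m^3, m, m^2).
import numpy as np

lengths = np.array([10.0, 12.5, 8.0])    # member lengths l_i [m] (made up)
areas = np.array([0.02, 0.015, 0.01])    # cross-section areas a_i [m^2] (made up)
W = 7.85 * float(lengths @ areas)
print(W)                                 # weight in tons
```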
The start value of W is 148.75 tons and the smallest reliability index for any failure element is
β = 5.42. Optimal values for z are obtained after 27 iterations with the NLPQL algorithm. The
minimum weight is W = 104.49 tons and the smallest β-value is β = 4.00. This lowest acceptable
reliability index β = 4.00 is obtained for 7 stability failure elements. The shape of the structure
in the initial state (iteration 0), after 8 iterations, after 20 iterations, and the optimal shape (after
27 iterations) are shown in figure 5.
It is obvious from figure 5 that the optimal solution is not optimal from an economic point of view.
It is expensive to produce the 3 tubular joints on the symmetry line. This result is typical for shape
optimization of structures where the weight is used as objective function. It is, however, not
expedient to reformulate the optimization problem so that the production costs of e.g. tubular
joints are included, because the objective function would in such a case become very complicated.
It seems much more natural to use expert knowledge in the way described below. As a simple
example of expert knowledge consider the brace in figure 6, where, depending on the position of
the joint, a K-brace or an X-brace is considered most economic.
Iteration      0          8          20         27
W              148.75 t   117.06 t   104.67 t   104.49 t
Smallest β     5.42       3.92       3.97       4.00

Figure 5. Iteration history.
Figure 6. Optimal braces.
State
Weight (tons)  106.05   110.41   110.39   112.70
Smallest β     1.23     4.00     2.28     4.00

Figure 7. Optimization with expert knowledge included.
The same optimization problem is considered again, but now the expert knowledge illustrated in
figure 6 is included. The result of the optimization is shown in figure 7. States 1 and 2 are
identical with the initial state and the optimal state in figure 5.
In state 3 the middle brace is fixed as an X-brace. By continued iteration state 4 is then obtained.
Next the lowest brace is fixed as a K-brace (state 5) and by renewed optimization state 6 is obtained.
Finally, the upper brace is fixed as an X-brace (state 7) and by optimization state 8 is obtained.
In figure 8 the variation of the smallest β-index and the weight W during the iterations is shown.
In the optimal design (state 8) only 4 failure elements have β = 4.00.
Figure 8. Iteration history.
5. OPTIMAL INSPECTION STRATEGIES
An optimal inspection and repair strategy with n inspections can be formulated as the following
optimization problem

    min    C_I + Σ_{i=1}^{n} ( C_IN(q_i)(1 - P_F(T_i)) + C_R E[R_i] ) (1 + r)^{-T_i}
    z,t,q
           + Σ_{i=1}^{n+1} C_F(T_i)( P_F(T_i) - P_F(T_{i-1}) ) (1 + r)^{-T_i}

    s.t.   β(T) ≥ β^min                                        (4)
           Σ_{i=1}^{n} t_i ≤ t
q_i is the inspection quality at inspection time T_i, i = 1, ..., n. E[R_i] is the expected number of
repairs at the time T_i. P_F(T_i) is the probability of failure at the time T_i and r is the inflation rate.
C_I is the initial cost, C_IN the inspection cost, C_R the repair cost and C_F the failure cost.
Extensive research in this area is being performed within the EC research programme BRITE. It
is believed that the optimal strategies for inspection and repair based on the formulation above
will result in substantial savings compared with traditional strategies.
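For a fixed design and inspection plan the objective in (4) is a direct sum of discounted expected costs. The sketch below uses hypothetical cost figures and failure probabilities and, for brevity, evaluates the failure-cost sum only over the n inspection times (dropping the final (n+1)-th term):

```python
# Hedged sketch: expected discounted cost of an inspection/repair plan as
# in (4). All numbers are hypothetical.
import numpy as np

T = np.array([8.0, 16.0, 24.0])          # inspection times T_i [years]
P_F = np.array([0.002, 0.006, 0.012])    # cumulative failure probability at T_i
E_R = np.array([0.05, 0.08, 0.10])       # expected number of repairs at T_i
C_I, C_IN, C_R, C_F = 100.0, 1.0, 5.0, 1000.0  # initial/inspection/repair/failure cost
r = 0.05                                 # discount ("inflation") rate

disc = (1.0 + r) ** (-T)                 # discount factors (1 + r)^(-T_i)
dP = np.diff(np.concatenate(([0.0], P_F)))      # P_F(T_i) - P_F(T_{i-1})
cost = (C_I
        + np.sum((C_IN * (1.0 - P_F) + C_R * E_R) * disc)   # inspection + repair
        + np.sum(C_F * dP * disc))                          # failure cost
print(cost)
```

Optimization over the design z, the inspection times and the qualities q_i would then wrap this evaluation in an NLP solver subject to the reliability constraint β(T) ≥ β^min.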
6. REFERENCES
Aven, T.: Optimal Inspection and Replacement of a Coherent System. Microelectronics and
Reliability, Vol. 27, 1987, pp. 447-450.
Bourgund, U.: Reliability-Based Optimization of Structural Systems. Springer Lecture Notes in
Engineering, Vol. 31, 1987, pp. 52-65.
Bourgund, U.: Structural Optimization Based on Advanced Reliability Analysis. Proceedings
Int. Conf. on Computer Aided Design of Structures: Applications (eds. C. A. Brebbia & S.
Hernandez). Computational Mechanics Publ., 1989, pp. 243-251.
Brandt, A., S. Jendo & W. Marks: Probabilistic Approach to Reliability-Based Optimum
Design. Engineering Transactions, Polska Akademia Nauk, Vol. 32, 1984, pp. 57-74.
Brisighella, L., L. Simoni & P. Zaccaria: Optimum Elastoplastic Design under Environment.
ICASP 4, 1983, pp. 1261-1270.
Broding, W. C., F. W. Diederich & P. B. Parker: Structural Optimization and Design
Based on a Reliability Design Criterion. J. of Spacecraft, Vol. 1, 1964, pp. 56-61.
Burton, R. M. & G. T. Howard: Optimal System Reliability for a Mixed Series and Parallel
Structure. J. Math. Anal. and Appl., Vol. 28, 1969, pp. 370-382.
Carmichael, D. G.: Probabilistic Optimal Design of Framed Structures. Computer Aided De-
sign, Vol. 13, 1981, pp. 261-264.
Casciati, F. & L. Faravelli: Structural Reliability and Structural Design Optimization. ICOS-
SAR '85, Proceedings, III61-III70.
Cheng, F. Y. & C.-C. Chang: Optimality Criteria for Safety-Based Design. Proc. 5. ASCE-
EMD Specialty Conf. (eds. A. P. Boresi & K. P. Chong), Laramie, Wyoming, Vol. 1, 1984, pp.
54-57.
Cheng, F. Y. & C.-C. Chang: Optimum Design of Steel Buildings with Consideration of
Reliability. Proc. 4. Int. Conf. on Structural Safety and Reliability (eds. I. Ionishi, A. H.-S. Ang
& M. Shinozuka), Kobe, Japan, Vol. III, 1985, pp. 81-89.
Chong-Hong, H., S. Yasuyuki & I. Takuzo: Reliability of a Structure Using Chance-
Constrained Programming. Bulletin of JSME, Vol. 21, 1978, pp. 37-43.
Corotis, R. B. & A. M. Nafday: Structural System Reliability Using Linear Programming and
Simulation. J. Struct. Eng., ASCE, Vol. 115, 1989.
Davidson, J. W., L. P. Felton & G. C. Hart: On Reliability-Based Structural Optimization
for Earthquakes. Computers & Structures, Vol. 12, 1980, pp. 99-105.
Davidson, J. W., L. P. Felton & G. C. Hart: Optimum Design of Structures with Random
Parameters. Computers & Structures, vol. 7, 1977, pp. 481-486.
Duffuaa, S. O. & A. Raouf: Mathematical Optimization Models for Multicharacteristic Repeat
Inspections. Appl. Math. Modelling, Vol. 13, 1988, pp. 408-412.
Ellis, J. H., R. B. Corotis & J. J. Zimmerman: Analysis of Structural System Reliability
with Chance Constrained Programming. Proceedings of Int. Conf. on Computer Aided Opti-
mum Design of Structures: Applications (eds. C. A. Brebbia & S. Hernandez). Computational
Mechanics Publ., 1989, pp. 253-263.
Enevoldsen, I., J. D. Sørensen & P. Thoft-Christensen: Shape Optimization of Mono-Tower
Offshore Platforms. Proc. Int. Conf. on Computer Aided Design of Structures: Applications (eds.
C. A. Brebbia & S. Hernandez). Computational Mechanics Publ., 1989, pp. 297-308.
Fagundo, F. E., M. Hoit & A. Soeiro: Probabilistic Design and Optimization of Reinforced
Concrete Frames. NATO ASI on Optimization and Decision Support Systems, Edinburgh, UK,
1989.
Feng, Y. S. & F. Moses: Optimum Design, Redundancy and Reliability of Structural Systems.
Computers & Structures, Vol. 24, 1986, pp. 239-251.
Feng, Y. S. & F. Moses: A Method of Structural Optimization Based on Structural System
Reliability. J. Struct. Mech., Vol. 14, 1986, pp. 437-453.
Frangopol, D. M.: Alternative Solutions in Reliability-Based Optimum Design. Proc. 5. Eng.
Mech. Div. Specialty Conf., EM Div./ASCE, Laramie, Wyoming, 1984, pp. 1232-1236.
Frangopol, D. M.: Interactive Reliability-Based Structural Optimization. Computers & Struc-
tures, Vol. 19, 1984, pp. 559-563.
Madsen, H. O., J. D. Sørensen & R. Olesen: Optimal Inspection Planning for Fatigue
Damage of Offshore Structures. ICOSSAR'89, San Francisco, 1989.
Madsen, H. O. & J. D. Sørensen: Probability-Based Optimization of Fatigue Design, Inspec-
tion and Maintenance. Proc. on Integrity of Offshore Structures, Glasgow, 1990.
Mahadevan, S.: Stochastic Finite Element-Based Structural Reliability Analysis and Optimiza-
tion. Ph. D. Thesis, Georgia Institute of Technology, Atlanta, 1988.
Mahadevan, S. & A. Haldar: Stochastic Finite Element-Based Optimum Design of Large
Structures. Proceedings of Int. Conf. on Computer Aided Design of Structures: Applications
(eds. C. A. Brebbia & S. Hernandez). Computational Mechanics Publ., 1989, pp. 265-274.
Mahadevan, S. & A. Haldar: Efficient Algorithm for Stochastic Structural Optimization. J.
Struct. Engineering, Vol. 115, 1989, pp. 1579-1598.
Mjelde, K. M.: Inspection-Optimization for Serial Systems of Dependent Elements. Structural
Safety, Vol. 2, 1984, pp. 119-125.
Mohammadi, J.: Seismic Safety of Lifelines - An Optimum Design Method. Structural Safety,
Vol. 2, 1985, pp. 301-308.
Moses, F.: Approaches to Structural Reliability and Optimization. An Introduction to Structural
Optimization (ed. M. Z. Cohn), Solid Mechanics Div., University of Waterloo, 1969, pp. 81-120.
Moses, F.: Sensitivity Studies in Structural Reliability. Structural Reliability and Codified Design
(ed. N. C. Lind). Solid Mech. Div., University of Waterloo, 1970, pp. 1-17.
Moses, F., R. Fox & G. Goble: Mathematical Programming Applications in Structural Design.
SM Study, 1970, pp. 379-391.
Moses, F.: Optimization with Reliability Constraints. Automated Design and Optimization,
Trondheim Press, 1972.
Moses, F. & D. E. Kinser: Optimum Structural Design with Failure Probability Constraints.
AIAA Journal, Vol. 5, 1967, pp. 1152-1158.
Moses, F. & J. D. Stevenson: Reliability-Based Structural Design. J. Struct. Div., ASCE,
Vol. 96, 1970, pp. 221-244.
Moses, F.: Structural System Reliability and Optimization. Computers & Structures, Vol. 7,
1977, pp. 283-290.
Munford, A. G. & A. K. Shahani: A Nearly Optimal Inspection Policy. Operational Research
Quarterly, Vol. 23, 1972, pp. 373-379.
Murotsu, Y., M. Yonezawa, F. Oba & K. Niwa: A Method for Reliability Analysis and
Optimal Design of Structural Systems. Proc. 12. Int. Symp. on Space Technology and Science,
1977, pp. 1047-1054.
Murotsu, Y., M. Kishi, H. Okada, M. Yonezawa & K. Taguchi: Probabilistic Optimum
Design of Plane Structures. Proc. IFIP Conf. on System Modelling and Optimization (ed. P.
Thoft-Christensen). Lecture Notes in Control and Information Sciences, Vol. 59, 1984, pp. 545-
554.
Murotsu, Y., M. Yonezawa, F. Oba & K. Niwa: Optimum Structural Design Based on
Extended Reliability Theory. Proc. 11th Congress of Int. Council of the Aeronautical Sciences,
Lisbon, 1978-79, pp. 572-581.
Murotsu, Y., M. Yonezawa, F. Oba & K. Niwa: Optimum Structural Design under Con-
straint on Failure Probability. ASME Publication 79-DET-114, 1979, pp. 1-12.
Murotsu, Y., M. Yonezawa, F. Oba & K. Niwa: Optimum Design Problems in Reliability-
Based Structural Design. HOPE Int. JSME Symp., Tokyo, Oct./Nov. 1977, pp. 461-466.
Murthy, P. N. & G. Subramanian: Minimum Weight Analysis Based on Structural Reliability.
AIAA Journal, Vol. 6, 1968, pp. 2037-2038.
Nakagawa, T. & K. Yasui: Approximate Calculation of Optimal Inspection Times. J. Opera-
tional Research Society, Vol. 31, 1980.
Nakib, R. & D. M. Frangopol: Reliability-Based Analysis and Optimization of Ductile Struc-
tural Systems. Structural Research Series 8501, Dept. Civil, Env. and Archit. Eng., University of
Colorado, Boulder, 1985.
Nikolaidis, E. & R. Burdisso: Reliability-Based Optimization: A Safety Index Approach.
Computers & Structures, Vol. 28, 1988, pp. 781-788.
Ohnishi, M., H. Kawai & H. Mine: An Optimal Inspection and Replacement Policy for a
Deteriorating System. J. Applied Probability, Vol. 23, 1986, pp. 973-988.
Parimi, S. R. & M. Z. Cohn: Optimal Criteria in Probabilistic Structural Design. Optimization
in Structural Design (eds. A. Sawczuk & Z. Mroz), Springer-Verlag, 1975, pp. 278-293.
Parimi, S. R. & M. Z. Cohn: Optimal Solutions in Probabilistic Structural Design. Part I:
Theory. Part II: Applications. J. de Mecanique Appliquee, Vol. 2, 1978, pp. 47-92.
Rackwitz, R.: Optimization of Measures for Quality Assurance. IABSE Symposium, Tokyo
1986, IABSE Report, Vol. 51, pp. 91-100.
Rashed, R. & F. Moses: Application of Linear Programming to Structural System Reliability.
Computers & Structures, Vol. 24, 1986, pp. 375-384.
Rojiani, K. B. & G. L. Bailey: A Comparison of Reliability-Based Optimum Design and AISC
Code Based Design. Proc. Int. Symp. on Optimal Structural Design, Tucson, Ariz., 1981.
Rosenblueth, E.: Reliability-Based Optimum Design of Offshore Platforms. Int. J. of Prob. and
Statistics in Eng. Research and Development. M. Dekker Inc., Vol. 1, 1983.
Rosenblueth, E. & E. Mendoza: Reliability Optimization in Isostatic Structures. J. Eng.
Mech. Div., ASCE, Vol. 97, 1971, pp. 1625-1640.
Schueller, G. I.: Reliability-Based Optimum Design of Offshore Platforms. Int. J. of Prob. and
Statistics in Eng. Research and Development. M. Dekker Inc., Vol. 1, 1983.
Sherif, Y. S. & M. L. Smith: Optimal Maintenance Models for Systems Subject to Failure - A
Review. Naval Research Logistics Quarterly, USA, Vol. 28, 1981, pp. 47-74.
Shinozuka, M. & J. N. Yang: Optimum Structural Design Based on Reliability and Proof-Load
Testing. NASA Tech. Rept. 1032-1042, 1969.
Shinozuka, M., J. N. Yang & E. Heer: Optimal Structural Design Based on Reliability
Analysis. 8th Int. Conf. on Space Science and Tech., Japan, 1969.
Shiraishi, N. & H. Furuta: Safety Analysis and Minimum Weight Design of Rigid Frames Based
on Reliability Concepts. Memoirs of the Faculty of Engineering, Kyoto University, Vol. XLI, Part
4, 1979, pp. 474-479.
Simoes, L. M. C.: Reliability-Based Plastic Synthesis of Reinforced Concrete Slabs. Proc. Int.
Conf. on Computer Aided Optimum Design of Structures: Applications (eds. C. A. Brebbia & S.
Hernandez), 1989, pp. 285-295.
Simoes, L. M. C.: Reliability-Based Plastic Synthesis of Portal Frames. NATO ASI on Optimi-
zation and Decision Support Systems, Edinburgh, UK, 1989.
Skjong, R.: Reliability-Based Optimization of Inspection Strategies. Proc. ICOSSAR'85, 1985,
pp. III614-III618.
Soltani, M. & R. B. Corotis: Failure Cost Design of Structural Systems. Structural Safety,
Vol. 5, 1988, pp. 239-252.
Stevenson, J. D.: Reliability Analysis and Optimum Design of Structural Systems with Appli-
cations to Rigid Frames. Rep. No. 14, Structures and Mech. Design Div., Case Western Reserve
University, Cleveland, 1967.
Surahman, A. & K. B. Rojiani: Reliability-Based Optimum Design of Concrete Frames. J.
Structural Eng., ASCE, Vol. 109, 1981, pp. 741-757.
Switzky, H.: Minimum Weight Design with Structural Reliability. Proc. 5 Annual Structures
and Materials Conf., Am. Inst. of Aeronautics and Astronautics, Palm Springs, Cal., 1964.
Switzky, H.: Minimum Weight with Structural Reliability. J. of Aircraft, Vol. 2, 1965, pp.
228-232.
Sørensen, J. D.: PRADSS: Program for Reliability Analysis and Design of Structural Systems.
Structural Reliability Theory, Paper No. 36, The University of Aalborg, Denmark, 1988.
Sørensen, J. D.: Probabilistic Design of Offshore Structural Systems. Proc. ASCE Specialty
Conf. on Probabilistic Methods in Civil Engineering (ed. P. D. Spanos), 1988, pp. 189-192.
Sørensen, J. D.: Reliability-Based Optimization of Structural Elements. Structural Reliability
Theory, Paper No. 18, The University of Aalborg, Denmark, 1986.
Sørensen, J. D.: Reliability-Based Optimization of Structural Systems. 13th IFIP Conference
on "System Modelling and Optimization", Tokyo, Japan, 1987.
Sørensen, J. D.: Optimal Design with Reliability Constraints. Structural Reliability Theory,
Paper No. 45, The University of Aalborg, Denmark, 1988.
Sørensen, J. D. & I. Enevoldsen: Sensitivity Analysis in Reliability-Based Shape Optimization.
NATO ASI on Optimization and Decision Support Systems, Edinburgh, UK, 1989.
Sørensen, J. D. & P. Thoft-Christensen: Reliability-Based Optimization of Parallel Systems.
Proc. 14th IFIP TC-7 Conf. on System Modelling and Optimization, Leipzig, DDR, July 1989.
Sørensen, J. D. & P. Thoft-Christensen: Integrated Reliability-Based Optimal Design of
Structures. Proc. IFIP Conf. on Reliability and Optimization of Structural Systems (ed. P.
Thoft-Christensen). Lecture Notes in Engineering, Vol. 33, Springer-Verlag, 1987, pp. 385-398.
Sørensen, J. D. & P. Thoft-Christensen: Inspection Strategies for Concrete Bridges. Proc.
IFIP Conf. on Reliability and Optimization of Structural Systems '88 (ed. P. Thoft-Christensen).
Lecture Notes in Engineering, Vol. 48, Springer-Verlag, 1989, pp. 325-335.
Sørensen, J. D. & P. Thoft-Christensen: Structural Optimization with Reliability Con-
straints. Proc. IFIP Conf. on System Modelling and Optimization (eds. A. Prekopa, J. Szelezsan
& B. Strazicky). Lecture Notes in Control and Information Sciences, Vol. 84, Springer-Verlag,
1986, pp. 876-885.
Thoft-Christensen, P.: Applications of Structural Systems Reliability Theory in Offshore En-
gineering. State-of-the-Art. Proc. Int. Symp. on Integrity of Offshore Structures, Glasgow, UK.
Elsevier Applied Science, 1987, pp. 1-23.
Thoft-Christensen, P.: Application of Optimization Methods in Structural Systems Reliability
Theory. Proc. IFIP Conf. on System Modelling and Optimization (eds. M. Iri & K. Yajima).
Lecture Notes in Control and Information Sciences, Vol. 113, Springer-Verlag, 1988, pp. 484-497.