
7. RISK/RELIABILITY-BASED HYDRAULIC ENGINEERING DESIGN
Yeou-Koung Tung

Department of Civil Engineering

Hong Kong University of Science and Technology

Clear Water Bay, Kowloon,

Hong Kong

7.1. INTRODUCTION
7.1.1. Uncertainties in Hydraulic Engineering Design
In designing hydraulic engineering systems, uncertainties arise in various aspects including, but not limited to, hydraulic,
hydrologic, structural, environmental, and socioeconomic aspects. Uncertainty is attributed to the lack of perfect knowledge
concerning the phenomena and processes involved in problem definition and resolution. In general, uncertainty arising because
of the inherent randomness of physical processes cannot be eliminated and one has to live with it. On the other hand,
uncertainties, such as those associated with the lack of complete knowledge about processes, models, parameters, data, and
so on, can be reduced through research, data collection, and careful manufacturing.

Uncertainties in hydraulic engineering system design can be divided into four basic categories: hydrologic, hydraulic, structural,
and economic (Mays and Tung, 1992). Hydrologic uncertainty for any hydraulic engineering problem can further be classified
into inherent, parameter, or model uncertainties. Hydraulic uncertainty refers to the uncertainty in the design of hydraulic
structures and in the analysis of the performance of hydraulic structures. Structural uncertainty refers to failure from structural
weaknesses. Economic uncertainty can arise from uncertainties in various cost items, inflation, project life, and other intangible
factors. More specifically, uncertainties in hydraulic design could arise from various sources (Yen et al., 1986) including natural
uncertainties, model uncertainties, parameter uncertainties, data uncertainties, and operational uncertainties.

The most complete and ideal way to describe the degree of uncertainty of a parameter, a function, a model, or a system in
hydraulic engineering design is the probability density function (PDF) of the quantity subject to uncertainty. However, such a
probability function cannot be derived or found in most practical problems. Alternative ways of expressing the uncertainty of a
quantity include confidence intervals or statistical moments. In particular, the second order moment, that is, the variance or
standard deviation, is a measure of the dispersion of a random variable. Sometimes, the coefficient of variation, defined as the
ratio of standard deviation to the mean, is also used.

The existence of various uncertainties (including inherent randomness of natural processes) is the main contributor to the
potential failure of hydraulic engineering systems. Knowledge of uncertainty features of hydraulic engineering systems is
essential for assessing their reliability.

In hydraulic engineering design and analysis, the decisions on the layout, capacity, and operation of the system largely depend
on the system response under some anticipated design conditions. When some of the components in a hydraulic engineering
system are subject to uncertainty, the system’s responses under the design conditions cannot be assessed with certainty. An
engineer should consider various criteria including, but not limited to, the cost of the system, failure probability, and
consequences of failure, such that a proper design can be made for the system.

© McGraw-Hill Education. All rights reserved. Any use is subject to the Terms of Use, Privacy Notice and copyright information.
In hydraulic engineering design and analysis, the design quantity and system output are functions of several system
parameters not all of which can be quantified with absolute certainty. The task of uncertainty analysis is to determine the
uncertainty features of the system outputs as a function of uncertainties in the system model and in the stochastic parameters
involved. Uncertainty analysis provides a formal and systematic framework to quantify the uncertainty associated with the
system output. Furthermore, it offers the designer useful insights with regard to the contribution of each stochastic parameter
to the overall uncertainty of the system outputs. Such knowledge is essential in identifying the “important” parameters to which
more attention should be given to better assess their values and, accordingly, to reduce the overall uncertainty of the system
outputs.

7.1.2. Reliability of Hydraulic Engineering Systems


All hydraulic engineering systems placed in a natural environment are subject to various external stresses. The resistance or
strength of a hydraulic engineering system is its ability to accomplish the intended mission satisfactorily without failure when
subject to loads, demands, or external stresses. Failure occurs when the resistance of the system is exceeded by the load.
From the previous discussions on the existence of uncertainties, the capacity of a hydraulic engineering system and the
imposed loads, more often than not, are random and subject to some degree of uncertainty. Hence, the design and operation of
hydraulic engineering systems are always subject to uncertainties and potential failures.

The reliability, ps, of a hydraulic engineering system is defined as the probability of nonfailure, that is, the probability that the
resistance of the system exceeds the load:

p_s = P(R ≥ L)   (7.1)

where P(·) denotes the probability. The failure probability, pf, is the complement of the reliability, which can be expressed as

p_f = P(R < L) = 1 − p_s   (7.2)

In hydraulic engineering system design and analysis, loads generally arise from natural events, such as floods and storms,
which occur randomly in time and in space. A common practice for determining the reliability of a hydraulic engineering system
is to assess the return period or recurrence interval of the design event. In fact, the return period is equal to the reciprocal of the
probability of the occurrence of the event in any one time interval. For most engineering applications, the time interval chosen is
1 year so that the probability associated with the return period is the average annual probability of the occurrence of the event.
Flood frequency analysis, using the annual maximum flow series, is a typical example of this kind of application. Hence, the
determination of return period depends on the time period chosen (Borgman, 1963). The main disadvantage of using the return
period method is that reliability is measured only in terms of time of occurrence of loads without considering the interactions
with the system resistance (Melchers, 1987).
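The return-period arithmetic described above can be sketched briefly; the 100-year event and 50-year horizon used below are illustrative values, not figures from the text.

```python
# Sketch of the return-period arithmetic: the annual exceedance probability
# of a T-year event is p = 1/T, and the chance of at least one exceedance
# in an n-year horizon follows from the complement rule, assuming
# independent years. T = 100 and n = 50 are illustrative choices.
def prob_at_least_one(T, n):
    return 1 - (1 - 1 / T) ** n

print(round(prob_at_least_one(100, 50), 3))  # -> 0.395
```

Note that even over half the return period, the chance of experiencing the design event is already close to 40 percent, which is one reason the return period alone is a blunt reliability measure.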

Two other types of reliability measures that consider the relative magnitudes of resistance and anticipated load (called design
load) are frequently used in engineering practice. One is the safety margin (SM), defined as the difference between the
resistance (R) and the anticipated load (L), that is,

SM = R − L   (7.3)

The other is called the safety factor (SF), a ratio of resistance to load, which is defined as

SF = R/L   (7.4)
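For one common special case, Eq. (7.1) has a closed form: when resistance and load are independent normal variables, the safety margin SM = R − L is also normal, and the reliability depends only on the ratio of its mean to its standard deviation. The sketch below uses illustrative numbers, not values from the text.

```python
from math import erf, sqrt

# Reliability p_s = P(R >= L) for independent normal resistance and load:
# SM = R - L is normal with mean mu_R - mu_L and variance sd_R^2 + sd_L^2,
# so p_s = Phi(beta) with beta the "reliability index". Numbers are
# illustrative assumptions, not data from the chapter.
def reliability_normal(mu_r, sd_r, mu_l, sd_l):
    beta = (mu_r - mu_l) / sqrt(sd_r**2 + sd_l**2)
    return 0.5 * (1 + erf(beta / sqrt(2)))  # standard normal CDF at beta

ps = reliability_normal(mu_r=50.0, sd_r=5.0, mu_l=35.0, sd_l=5.0)
pf = 1 - ps  # failure probability, Eq. (7.2)
print(round(ps, 3))  # -> 0.983
```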

Yen (1979) summarized several types of safety factors and discussed their applications to hydraulic engineering system
design.

There are two basic probabilistic approaches to evaluate the reliability of a hydraulic engineering system. The most direct
approach is a statistical analysis of data of past failure records for similar systems. The other approach is through reliability
analysis, which considers and combines the contribution of each factor potentially influencing the failure. The former is a
lumped system approach requiring no knowledge about the behavior of the facility or structure nor its load and resistance. For
example, dam failure data show that the overall average failure probability for dams of all types over 15 m in height is around
10⁻³ per dam-year (Cheng, 1993). In many cases, this direct approach is impractical because (1) the sample size is too small to
be statistically reliable, especially for low-probability/high-consequence events; (2) the sample may not be representative of the
structure or of the population; and (3) the physical conditions of the dam may be nonstationary, that is, varying with respect to
time. The average risk of dam failure mentioned above does not differentiate concrete dams from earthfill dams, arch dams
from gravity dams, large dams from small dams, or old dams from new dams. If one wants to know the likelihood of failure of a
particular 10-year-old double-curvature arch concrete high dam, one will most likely find failure data for only a few similar
dams; this is insufficient for any meaningful statistical analysis. Since no dams are identical and dam conditions change with
time, in many circumstances, it may be more desirable to use the second approach by conducting a reliability analysis.

There are two major steps in reliability analysis: (1) to identify and analyze the uncertainties of each contributing factor; and (2)
to combine the uncertainties of the stochastic factors to determine the overall reliability of the structure. The second step, in
turn, may proceed in two different ways: (1) directly combining the uncertainties of all factors, or (2) separately combining the
uncertainties of the factors belonging to different components or subsystems to evaluate first the respective subsystem
reliability and then combining the reliabilities of the different components or subsystems to yield the overall reliability of the
structure. The first way applies to very simple structures, whereas the second way is more suitable for complicated systems.
For example, to evaluate the reliability of a dam, the hydrologic, hydraulic, geotechnical, structural, and other disciplinary
reliabilities could be evaluated separately first and then combined to yield the overall dam reliability. Or, the component
reliabilities could be evaluated first, according to the different failure modes, and then combined. Vrijling (1993) provides an
actual example of the determination and combination of component reliabilities in the design of the Eastern Scheldt Storm
Surge Barrier in The Netherlands.

The main purpose of this chapter is to demonstrate the usage of various practical uncertainty and reliability analysis techniques
through worked examples. Only the essential theories of the techniques are described. For more detailed descriptions of the
methods and applications, see Tung (1996).

7.2. TECHNIQUES FOR UNCERTAINTY ANALYSIS


In this section, several analytical methods are discussed that would allow an analytical derivation of the exact PDF and/or
statistical moments of a random variable as a function of several random variables. In theory, the concepts described in this
section are straightforward. However, the success of implementing these procedures largely depends on the functional relation,
forms of the PDFs involved, and analyst’s mathematical skill. Analytical methods are powerful tools for problems that are not
too complex. Although their usefulness is restricted in dealing with real life complex problems, situations do exist in which
analytical techniques could be applied to obtain exact uncertainty features of model outputs without approximation or
extensive simulation. However, situations often arise in which analytical derivations are virtually impossible. It is, then, practical
to find an approximate solution.

7.2.1. Analytical Technique: Fourier and Exponential Transforms


The Fourier and exponential transforms of a PDF, fx(x), of a random variable X are defined, respectively, as

F_X(s) = E[e^{isX}] = ∫_{−∞}^{∞} e^{isx} f_X(x) dx   (7.5a)

and

E[e^{sX}] = ∫_{−∞}^{∞} e^{sx} f_X(x) dx   (7.5b)

where i = √(−1) and E(·) is the expectation operator; E[e^{isX}] and E[e^{sX}] are called, respectively, the characteristic function and
moment generating function, of the random variable X. The characteristic function of a random variable always exists for all
values of the arguments whereas for the moment generating function this is not necessarily true. Furthermore, the
characteristic function for a random variable under consideration is unique. In other words, two distribution functions are
identical if and only if the corresponding characteristic functions are identical (Patel et al., 1976). Therefore, given a
characteristic function of a random variable, its probability density function can be uniquely determined through the inverse
transform as

f_X(x) = (1/2π) ∫_{−∞}^{∞} e^{−isx} F_X(s) ds   (7.6)

The characteristic functions of some commonly used PDFs are shown in Table 7.1. Furthermore, some useful operational
properties of Fourier transforms on a PDF are given in Table 7.2.

Using the characteristic function, the rth-order moment about the origin of the random variable X can be obtained as

E[X^r] = (1/i^r) [d^r F_X(s)/ds^r]_{s=0}   (7.7)

Fourier and exponential transforms are particularly useful when random variables are independent and linearly related. In such
cases, the convolution property of the Fourier transform can be applied to derive the characteristic function of the resulting
random variable. More specifically, consider W = X1 + X2 + . . . + XN, in which all the Xj are independent random variables with
known PDF fj(x), j = 1, 2, . . . , N. The characteristic function of W can then be obtained as

F_W(s) = F_{X1}(s) F_{X2}(s) · · · F_{XN}(s)   (7.8a)

Table 7.1 Characteristic Functions of Some Commonly Used Distributions

Table 7.2 Operation Properties of Fourier Transform on a PDF

Property        PDF             Random Variable   Fourier Transform

Standard        f_X(x)          X                 F_X(s)

Scaling         f_X(ax)         X                 (1/a) F_X(s/a),  a > 0

Linear          a f_X(x)        X                 a F_X(s)

Translation 1   e^{ax} f_X(x)   X                 F_X(s − ia)

Translation 2   f_X(x − a)      X                 e^{ias} F_X(s)

and

E[e^{sW}] = E[e^{sX1}] E[e^{sX2}] · · · E[e^{sXN}]   (7.8b)

which is the product of the characteristic or moment generating functions of the individual random variables. The resulting
characteristic function or moment generating function of W can be used in Eq. (7.7) to obtain the statistical moments of any
order for the random variable W. Furthermore, the inverse transform of F_W(s), according to Eq. (7.6), can be made to derive the
PDF of W, if it is analytically possible.
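The convolution property in Eq. (7.8a) can also be checked numerically. The sketch below compares the empirical characteristic function of a sum of two independent uniform variables against the product of the individual ones; the uniform bounds and the argument s are arbitrary choices for illustration.

```python
import numpy as np

# Empirical check of Eq. (7.8a): for W = X1 + X2 with independent X1, X2,
# the characteristic function of W equals the product of the individual
# characteristic functions. Uniform(0, 1) variables are an arbitrary choice.
rng = np.random.default_rng(0)
x1 = rng.uniform(0.0, 1.0, 200_000)
x2 = rng.uniform(0.0, 1.0, 200_000)
w = x1 + x2

def ecf(sample, s):
    """Empirical characteristic function, E[exp(i*s*X)]."""
    return np.mean(np.exp(1j * s * sample))

s = 2.0
lhs = ecf(w, s)                 # characteristic function of the sum
rhs = ecf(x1, s) * ecf(x2, s)   # product of the individual ones
print(abs(lhs - rhs))           # small sampling error only
```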

7.2.2. Analytical Technique: Mellin Transform

When the random variables in a function W = g(X) are independent and nonnegative and the function g(X) has a multiplicative
form as

W = g(X) = a_0 X_1^{a_1} X_2^{a_2} · · · X_N^{a_N}   (7.9)

in which a_0 and the a_j are constants,

the Mellin transform is particularly attractive for conducting uncertainty analysis (Tung, 1990). The Mellin transform of a PDF
fX(x), where x is positive, is defined as

M_X(s) = E[X^{s−1}] = ∫_0^∞ x^{s−1} f_X(x) dx   (7.10)

where MX(s) is the Mellin transform of the function fX(x) (Springer, 1979). Since MX(s) = E[X^{s−1}], so that E[X^r] = MX(r + 1), the
Mellin transform provides an alternative way to find the moments of any order for non-negative random variables.

Similar to the convolutional property of the exponential and Fourier transforms, the Mellin transform of the convolution of the
PDFs associated with multiple independent random variables in a product form is simply equal to the product of the Mellin
transforms of individual PDFs. In addition to the convolution property, the Mellin transform has several useful operational
properties as summarized in Tables 7.3 and 7.4. Furthermore, the Mellin transform of some commonly used distributions are
summarized in Table 7.5.

Example 7.1. Manning’s formula is frequently used for determining the flow capacity of a storm sewer:

Q = 0.463 n^{−1} D^{2.67} S^{0.5}

Table 7.3 Operation Properties of the Mellin Transform on a PDF

Property         PDF             Random Variable   Mellin Transform

Standard         f_X(x)          X                 M_X(s)

Scaling          f_X(ax)         X                 a^{−s} M_X(s)

Linear           a f_X(x)        X                 a M_X(s)

Translation      x^a f_X(x)      X                 M_X(a + s)

Exponentiation   f_X(x^a)        X                 a^{−1} M_X(s/a)

Source: From Park (1987).

Table 7.4 Mellin Transform of Products and Quotients of Random Variables*

Random Variable   PDF Given         M_W(s)

W = X             f_X(x)            M_X(s)

W = X^b           f_X(x)            M_X(bs − b + 1)

W = 1/X           f_X(x)            M_X(2 − s)

W = XY            f_X(x), g_Y(y)    M_X(s) M_Y(s)

W = X/Y           f_X(x), g_Y(y)    M_X(s) M_Y(2 − s)

W = aX^b Y^c      f_X(x), g_Y(y)    a^{s−1} M_X(bs − b + 1) M_Y(cs − c + 1)

*Source: From Park (1987). a, b, c: constants; X, Y, W: random variables.

Table 7.5 Mellin Transforms for Some Commonly Used Probability Density Functions

where Q is the flow rate (ft³/s), n is the roughness coefficient, D is the sewer diameter (ft), and S is the pipe slope (ft/ft). Assume
that all three model parameters are independent random variables with the statistical properties listed below. Compute the mean
and variance of the sewer flow capacity by the Mellin transform.

Parameter Distribution

n Uniform distribution with lower bound 0.0137 and upper bound 0.0163

D Triangular distribution with lower bound 2.853, mode 3.0, and upper bound 3.147

S Uniform distribution with bounds (0.00457, 0.00543)

Referring to Table 7.4, the Mellin transform of the sewer flow capacity, MQ(s), can be derived as

M_Q(s) = (0.463)^{s−1} M_n(−s + 2) M_D(2.67s − 1.67) M_S(0.5s + 0.5)

For the roughness coefficient n, having a uniform distribution with bounds (a, b), from Table 7.5 one obtains

M_n(s) = (b^s − a^s) / [s(b − a)]

For the sewer diameter D, with a triangular distribution having lower bound a, mode m, and upper bound b, one obtains

M_D(s) = [2/(b − a)] { [(m^{s+1} − a^{s+1})/(s + 1) − a(m^s − a^s)/s] / (m − a) + [b(b^s − m^s)/s − (b^{s+1} − m^{s+1})/(s + 1)] / (b − m) }

For the sewer slope S, with a uniform distribution with bounds (a, b), one obtains

M_S(s) = (b^s − a^s) / [s(b − a)]

Substituting the individual terms into MQ(s) results in the expression of the Mellin transform of the sewer flow capacity
specifically for the distributions associated with the three stochastic model parameters.

Based on the bounds given, the Mellin transform of each stochastic model parameter can be evaluated at the arguments needed
for s = 2 and s = 3. The computations are shown in the following table:

Term                 s = 2                s = 3

0.463^{s−1}          0.463                0.2144

M_n(−s + 2)          M_n(0) = 66.834      M_n(−1) = 4478.080

M_D(2.67s − 1.67)    M_D(3.67) = 18.806   M_D(6.34) = 354.681

M_S(0.5s + 0.5)      M_S(1.50) = 0.0707   M_S(2.00) = 0.005

Therefore, the mean sewer flow capacity can be determined as

μ_Q = E(Q) = M_Q(2) = (0.463)(66.834)(18.806)(0.0707) ≈ 41.14 ft³/s

The second moment about the origin of the sewer flow capacity is

E(Q²) = M_Q(3) = (0.2144)(4478.080)(354.681)(0.005) ≈ 1702.6 ft⁶/s²

The variance of the sewer flow capacity can then be determined as

Var(Q) = E(Q²) − [E(Q)]² ≈ 1702.6 − 1692.5 ≈ 10.1 ft⁶/s²

with the standard deviation being σ_Q ≈ 3.2 ft³/s.
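The moments in this example can be cross-checked by evaluating the Mellin transforms numerically. The sketch below computes M_X(s) = E[X^(s−1)] for each parameter by quadrature and combines them per Table 7.4; the distribution bounds are those given in the example.

```python
import numpy as np

# Numerical cross-check of Example 7.1: M_X(s) = E[X^(s-1)] is evaluated by
# trapezoidal quadrature for each parameter, then combined per Table 7.4
# to give the mean sewer flow capacity E[Q] = M_Q(2).
def mellin_uniform(s, a, b, m=200_000):
    x = np.linspace(a, b, m)
    return np.trapz(x**(s - 1) / (b - a), x)

def mellin_triangular(s, a, mode, b, m=200_000):
    x = np.linspace(a, b, m)
    pdf = np.where(x <= mode,
                   2 * (x - a) / ((b - a) * (mode - a)),
                   2 * (b - x) / ((b - a) * (b - mode)))
    return np.trapz(x**(s - 1) * pdf, x)

# E[Q] = 0.463^(s-1) Mn(2-s) MD(2.67s-1.67) MS(0.5s+0.5) evaluated at s = 2
mean_q = (0.463
          * mellin_uniform(0.0, 0.0137, 0.0163)          # Mn(0)
          * mellin_triangular(3.67, 2.853, 3.0, 3.147)   # MD(3.67)
          * mellin_uniform(1.5, 0.00457, 0.00543))       # MS(1.50)
print(round(mean_q, 1))  # -> about 41.1 (ft^3/s)
```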

7.2.3. Approximate Technique: First-Order Variance Estimation (FOVE) Method

The FOVE method, also called the variance propagation method (Berthouex, 1975), estimates uncertainty features of a model
output based on the statistical properties of the model's random variables. The basic idea of the method is to approximate a
model involving random variables by the Taylor series expansion.

Consider that a hydraulic or hydrologic design quantity W is related to N random variables X1 , X2 , . . ., XN as

W = g(X) = g(X1, X2, . . ., XN)   (7.11)

where X = (X1, X2, . . ., XN)^t is an N-dimensional column vector of random variables and the superscript t represents the
transpose of a matrix or vector. The Taylor series expansion of the function g(X) about the means of the random variables,
X = μ, in the parameter space can be expressed as

W = g(μ) + Σ_{i=1}^{N} [∂g/∂X_i](X_i − μ_i) + (1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} [∂²g/∂X_i∂X_j](X_i − μ_i)(X_j − μ_j) + ε   (7.12)

where μi is the mean of the ith random variable Xi and ε represents the higher order terms. The first–order partial derivative
terms are called the sensitivity coefficients, each representing the rate of change of model output W with respect to unit change
of each variable at μ.

Dropping the higher-order terms represented by ε, Eq. (7.12) is a second–order approximation of the model g(X). The
expectation of model output W can be approximated as

E[W] ≈ g(μ) + (1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} [∂²g/∂X_i∂X_j] Cov(X_i, X_j)   (7.13)

and the variance of W = g(X) can be expressed as

Var[W] ≈ Σ_{i=1}^{N} Σ_{j=1}^{N} [∂g/∂X_i][∂g/∂X_j] Cov(X_i, X_j) + higher-order terms involving cross-product moments of the X_i   (7.14)

As can be seen from Eq. (7.14), when random variables are correlated, the estimation of the variance of W using the second–
order approximation would require knowledge about the cross–product moments among the random variables. Information on
the cross–product moments are rarely available in practice. When the random variables are independent, Eqs. (7.13) and (7.14)
can be simplified, respectively, to

E[W] ≈ g(μ) + (1/2) Σ_{i=1}^{N} [∂²g/∂X_i²] Var(X_i)   (7.15)

and

Var[W] ≈ Σ_{i=1}^{N} [∂g/∂X_i]² Var(X_i) + Σ_{i=1}^{N} [∂g/∂X_i][∂²g/∂X_i²] E[(X_i − μ_i)³] + · · ·   (7.16)

Referring to Eq. (7.16), the variance of W from a second-order approximation, under the condition that all random variables are
statistically independent, would require knowledge of the third moments. For most practical applications where higher-order
moments and cross-product moments are not easily available, the first-order approximation is frequently adopted. In the area
of structural engineering, second-order methods are commonly used (Breitung, 1984; Der Kiureghian et al., 1987; Wen, 1987).

By truncating the second- and higher-order terms of the Taylor series, the first-order approximations of the mean and variance of W at X = μ are

E[W] ≈ g(μ)   (7.17)

and

Var[W] ≈ s^t C(X) s   (7.18)

in which s = (∂g/∂X1, . . ., ∂g/∂XN)^t is an N-dimensional vector of sensitivity coefficients evaluated at μ, and C(X) is the
variance-covariance matrix of the random vector X. When all random variables are independent, the variance of the model
output W can be approximated as

Var[W] ≈ s^t D s = Σ_{i=1}^{N} s_i² σ_i²   (7.19)

in which σ_i represents the standard deviation of X_i and D = diag(σ1², σ2², . . ., σN²) is a diagonal matrix of the variances of the
random variables involved. From Eq. (7.19), the ratio s_i²σ_i²/Var[W] indicates the proportion of the overall uncertainty in the
model output contributed by the uncertainty associated with the random variable X_i.

Example 7.2. Referring to Example 7.1, the uncertainty associated with the sewer slope due to installation error is 5 percent of
its intended value 0.005. Determine the uncertainty of the sewer flow capacity using the FOVE method for a section of 3 ft
sewer with a 2 percent error in diameter due to manufacturing tolerances. The roughness coefficient has the mean value 0.015
with a coefficient of variation 0.05. Assume that the correlation coefficient between the roughness coefficient n and sewer
diameter D is –0.75. The sewer slope S is uncorrelated with the other two random variables.

Solution: The first-order Taylor series expansion of Manning's formula about n = μn = 0.015, D = μD = 3.0, and S = μS = 0.005,
according to Eq. (7.12), is

Q ≈ Q̄ + (∂Q/∂n)(n − μn) + (∂Q/∂D)(D − μD) + (∂Q/∂S)(S − μS)

in which Q̄ = 0.463 μn^{−1} μD^{2.67} μS^{0.5}, and the sensitivity coefficients, evaluated at the means, are ∂Q/∂n = −Q̄/μn,
∂Q/∂D = 2.67Q̄/μD, and ∂Q/∂S = 0.5Q̄/μS. Based on Eq. (7.17), the approximated mean of the sewer flow capacity is

μ_Q ≈ Q̄ = 0.463 (0.015)^{−1} (3.0)^{2.67} (0.005)^{0.5} = 41.01 ft³/s

so that ∂Q/∂n = −2734, ∂Q/∂D = 36.50, and ∂Q/∂S = 4101. According to Eq. (7.18), the approximated variance of the sewer flow
capacity is

Var(Q) ≈ (∂Q/∂n)² Var(n) + (∂Q/∂D)² Var(D) + (∂Q/∂S)² Var(S)
         + 2(∂Q/∂n)(∂Q/∂D) Cov(n, D) + 2(∂Q/∂n)(∂Q/∂S) Cov(n, S) + 2(∂Q/∂D)(∂Q/∂S) Cov(D, S)

The above expression reduces to

Var(Q) ≈ (∂Q/∂n)² σn² + (∂Q/∂D)² σD² + (∂Q/∂S)² σS² + 2(∂Q/∂n)(∂Q/∂D) ρ_{n,D} σn σD

because Cov(n, S) = Cov(D, S) = 0. Since the standard deviations of roughness, pipe diameter, and slope are

σn = (0.05)(0.015) = 0.00075, σD = (0.02)(3.0) = 0.06, σS = (0.05)(0.005) = 0.00025

the variance of the sewer flow capacity can be computed as

Var(Q) ≈ (−2734)²(0.00075)² + (36.50)²(0.06)² + (4101)²(0.00025)² + 2(−2734)(36.50)(−0.75)(0.00075)(0.06)
       = 4.20 + 4.80 + 1.05 + 6.74 = 16.79

Hence, the standard deviation of the sewer flow capacity is σ_Q = √16.79 = 4.10 ft³/s, which is 10.0 percent of the estimated mean
sewer flow capacity.

Without considering the correlation between n and D, Var(Q) ≈ 16.79 − 6.74 = 10.05, which underestimates the variance of the
sewer flow capacity. The percentage contributions of the uncertainty of n, D, and S to the overall uncertainty of the sewer flow
capacity under the uncorrelated condition are, respectively, 41.8 percent, 47.7 percent, and 10.5 percent. The uncertainty
associated with the sewer slope contributes less significantly to the total sewer flow capacity uncertainty than the other two
random variables, even though S has the highest sensitivity coefficient among the three. This is because the variance of S,
Var(S), is smaller than the variances of the other two random variables.
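The FOVE computation of this example can be sketched compactly in matrix form, Var[W] ≈ sᵗC(X)s; the statistics below are those of Example 7.2.

```python
import numpy as np

# First-order variance estimation (Eqs. 7.17-7.18) for Manning's formula
# with the Example 7.2 statistics: CV(n) = CV(S) = 0.05, CV(D) = 0.02,
# and rho(n, D) = -0.75.
mu = np.array([0.015, 3.0, 0.005])           # means of n, D, S
sigma = np.array([0.00075, 0.06, 0.00025])   # standard deviations

def manning(n, d, s):
    return 0.463 * n**-1 * d**2.67 * s**0.5

q_mean = manning(*mu)                        # E[Q] ~ g(mu), Eq. (7.17)
# Analytic sensitivity coefficients evaluated at the means
grad = np.array([-q_mean / mu[0],            # dQ/dn
                 2.67 * q_mean / mu[1],      # dQ/dD
                 0.5 * q_mean / mu[2]])      # dQ/dS
rho = np.array([[1.0, -0.75, 0.0],
                [-0.75, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
cov = np.outer(sigma, sigma) * rho           # variance-covariance C(X)
var_q = grad @ cov @ grad                    # s^t C(X) s, Eq. (7.18)
print(round(q_mean, 2), round(var_q, 2))     # -> 41.01 16.79
```

Dropping the off-diagonal entries of `rho` reproduces the uncorrelated figure of 10.05 quoted above.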

7.2.4. Approximate Technique: Rosenblueth’s Probabilistic Point Estimation (PE) Method
Rosenblueth’s probabilistic point estimation (PE) method is a computationally straightforward technique for uncertainty
analysis. It can be used to estimate statistical moments of any order of a model output involving several random variables
which are either correlated or uncorrelated. Rosenblueth’s PE method was originally developed for handling random variables
that are symmetric (Rosenblueth, 1975). It was later extended to treat nonsymmetric random variables (Rosenblueth, 1981).

Consider a model, W = g(X), involving a single random variable X whose first three moments or probability density function
(PDF)/probability mass function (PMF) are known. Referring to Fig. 7.1, Rosenblueth’s PE method approximates the original
PDF or PMF of the random variable X by assuming that the entire probability mass of X is concentrated at two points, x_− and x_+.
Using the two-point approximation, the locations x_− and x_+ and the corresponding probability masses p_− and p_+ are
determined so as to preserve the first three moments of the random variable X. Without changing the nature of the original
problem, it is easier to deal with the standardized variable, X' = (X − μ)/σ, which has zero mean and unit variance. Hence, in
terms of X', the following four simultaneous equations can be established to solve for x'_−, x'_+, p_−, and p_+:

p_+ + p_− = 1   (7.20a)

p_+ x'_+ − p_− x'_− = 0   (7.20b)

p_+ (x'_+)² + p_− (x'_−)² = 1   (7.20c)

p_+ (x'_+)³ − p_− (x'_−)³ = γ   (7.20d)

Figure 7.1 Schematic diagram of Rosenblueth’s PE method in univariate case (Tung, 1996)

in which x'_− and x'_+ are the distances, in units of the standard deviation, from the mean to the two points x_− and x_+, and γ is
the skew coefficient of the random variable X. Solving Eqs. (7.20a–d) simultaneously, one obtains

x'_+ = γ/2 + √(1 + (γ/2)²)   (7.21a)

x'_− = x'_+ − γ   (7.21b)

p_+ = x'_− / (x'_+ + x'_−)   (7.21c)

p_− = 1 − p_+   (7.21d)

When the distribution of the random variable X is symmetric, that is, γ = 0, then Eqs. (7.21a–d) are reduced to x'– = x'+ = 1 and
p– = p+ = 0.5. This implies that, for a symmetric random variable, the two points are located at one standard deviation to either
side of the mean with equal probability mass assigned at the two points.

From x'_− and x'_+, the two points in the original parameter space, x_− and x_+, can respectively be determined as

x_− = μ − x'_− σ   (7.22a)

x_+ = μ + x'_+ σ   (7.22b)

Based on x_− and x_+, the values of the model W = g(X) at the two points can be computed, respectively, as w_− = g(x_−) and
w_+ = g(x_+). Then, the moment about the origin of W = g(X) of any order m can be estimated as

E[W^m] ≈ p_− w_−^m + p_+ w_+^m   (7.23)

Unlike the FOVE method, Rosenblueth’s PE method provides the added capability of accounting for the asymmetry associated
with the PDF of a random variable. Karmeshu and Lara-Rosano (1987) show that the FOVE method is a first-order
approximation to Rosenblueth’s PE method.
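The univariate point solution of Eqs. (7.21a–d) and (7.22a–b) can be sketched directly; the mean, standard deviation, and skew coefficient passed in below are illustrative values, not data from the text.

```python
from math import sqrt

# Univariate Rosenblueth points (Eqs. 7.21a-d, 7.22a-b). For gamma = 0 the
# points fall one standard deviation either side of the mean with
# probability 0.5 each; the inputs below are illustrative assumptions.
def rosenblueth_points(mu, sd, gamma):
    xp = gamma / 2 + sqrt(1 + (gamma / 2) ** 2)    # x'_+, Eq. (7.21a)
    xm = xp - gamma                                # x'_-, Eq. (7.21b)
    pp = xm / (xp + xm)                            # p_+,  Eq. (7.21c)
    pm = 1 - pp                                    # p_-,  Eq. (7.21d)
    return (mu - xm * sd, pm), (mu + xp * sd, pp)  # Eqs. (7.22a-b)

(lo, p_lo), (hi, p_hi) = rosenblueth_points(10.0, 2.0, 0.0)
print(lo, p_lo, hi, p_hi)  # -> 8.0 0.5 12.0 0.5
```

With a nonzero skew coefficient the two points shift and the probability masses become unequal, while the first three moments of X are still matched exactly.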


© McGraw-Hill Education. All rights reserved. Any use is subject to the Terms of Use, Privacy Notice and copyright information.
In a general case where a model involves N correlated random variables, the mth moment of the model output W = g(X1, X2, …,
XN) about the origin can be approximated as

E[W^m] ≈ Σ p(δ1, δ2, …, δN) [g(x_{1,δ1}, x_{2,δ2}, …, x_{N,δN})]^m   (7.24)

in which the summation is taken over all 2^N sign combinations, and the subscript δi is a sign indicator that can only be + or −,
representing the random variable Xi taking the value x_{i+} = μi + x'_{i+}σi or x_{i−} = μi − x'_{i−}σi, respectively. The probability
mass at each of the 2^N points, p(δ1, δ2, …, δN), can be approximated by

p(δ1, δ2, …, δN) ≈ Π_{i=1}^{N} p_{i,δi} + Σ_{i=1}^{N−1} Σ_{j=i+1}^{N} δi δj a_{ij}   (7.25)

with

a_{ij} = ρ_{ij} / 2^N   (7.26)

where ρij is the correlation coefficient between random variables Xi and Xj. The number of terms in the summation of Eq. (7.24)
is 2^N, which corresponds to the total number of possible combinations of + and − for all N random variables.

Example 7.3. Referring to Example 7.2, assume that all three model parameters in Manning’s formula are symmetric random
variables. Determine the uncertainty of the sewer flow capacity by Rosenblueth’s PE method.

Solution: Based on Manning’s formula for the sewer, Q = 0.463 n^{−1} D^{2.67} S^{0.5}, the standard deviations of the roughness
coefficient, sewer diameter, and pipe slope are

σn = 0.00075, σD = 0.06, σS = 0.00025

With N = 3 random variables, there are a total of 2³ = 8 possible points to be considered by Rosenblueth’s method. Since all
three random variables are symmetric, their skew coefficients are equal to zero. Therefore, according to Eqs. (7.21a–b), n'_− =
n'_+ = D'_− = D'_+ = S'_− = S'_+ = 1, and the corresponding values of the roughness coefficient, sewer diameter, and pipe slope are

n_− = 0.01425, n_+ = 0.01575; D_− = 2.94, D_+ = 3.06; S_− = 0.00475, S_+ = 0.00525

Substituting the values of n_−, n_+, D_−, D_+, S_−, and S_+ into Manning’s formula to compute the corresponding sewer flow
capacities, one has, for example,

Q(n_+, D_+, S_+) = 0.463 (0.01575)^{−1} (3.06)^{2.67} (0.00525)^{0.5} = 42.19 ft³/s

Similarly, the values of sewer flow capacity for the other seven points are given in the following table:

Point   n   D   S   Q (ft³/s)   p

1 + + + 42.19 0.03125

2 + + − 40.14 0.03125

3 + − + 37.92 0.21875

4 + − − 36.07 0.21875

5 − + + 46.64 0.21875

6 − + − 44.36 0.21875

7 − − + 41.91 0.03125

8 − − − 39.87 0.03125

Because the roughness coefficient and sewer diameter are symmetric, correlated random variables, the probability masses at
the 2³ = 8 points can be determined, according to Eqs. (7.25–7.26), as

p(δn, δD, δS) = (1/2³)(1 + δn δD ρ_{n,D}) = (1/8)(1 − 0.75 δn δD)

which yields 0.03125 when δn δD = +1 and 0.21875 when δn δD = −1.
The values of the probability masses are also tabulated in the last column of the above table. Therefore, the mth-order moment
about the origin for the sewer flow capacity can be calculated by Eq. (7.24). The computations of the first two moments about
the origin are shown in the following table, in which columns (1)–(3) are extracted from the above table:

Point   Q (ft³/s)   p         Q × p    Q²        Q² × p

(1)     (2)         (3)       (4)      (5)       (6)

1       42.19       0.03125   1.318    1780.00   55.625

2       40.14       0.03125   1.254    1611.22   50.351

3       37.92       0.21875   8.295    1437.93   314.547

4       36.07       0.21875   7.890    1301.04   284.603

5       46.64       0.21875   10.203   2175.29   475.845

6       44.36       0.21875   9.704    1967.81   430.458

7       41.91       0.03125   1.310    1756.45   54.889

8       39.87       0.03125   1.246    1589.62   49.676

Sum     ——          1.00000   41.219   ——        1715.99

From the above table, μQ = E(Q) = 41.219 ft³/s and E(Q²) = 1715.994 ft⁶/s². Then, the variance of the sewer flow capacity can be
estimated as

Var(Q) = E(Q²) − [E(Q)]² = 1715.994 − (41.219)² = 16.99 ft⁶/s²

Hence, the standard deviation of the sewer flow capacity is σ_Q = √16.99 = 4.12 ft³/s. Comparing with the results in Example 7.2,
one observes that Rosenblueth’s PE method yields higher values of the mean and variance for the sewer flow capacity than
those obtained with the FOVE method.
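The eight-point computation above can be sketched programmatically; the statistics are those of Example 7.3.

```python
import itertools
import numpy as np

# Rosenblueth's PE method (Eqs. 7.24-7.26) for Example 7.3: all three
# variables are symmetric, so each point lies one standard deviation from
# the mean and p = (1/8)(1 + sum of delta_i * delta_j * rho_ij).
mu = np.array([0.015, 3.0, 0.005])            # n, D, S
sigma = np.array([0.00075, 0.06, 0.00025])
rho = np.array([[1.0, -0.75, 0.0],
                [-0.75, 1.0, 0.0],
                [0.0, 0.0, 1.0]])

def manning(n, d, s):
    return 0.463 * n**-1 * d**2.67 * s**0.5

m1 = m2 = 0.0
for deltas in itertools.product([1, -1], repeat=3):
    d = np.array(deltas)
    p = (1 + sum(d[i] * d[j] * rho[i, j]
                 for i in range(3) for j in range(i + 1, 3))) / 8
    q = manning(*(mu + d * sigma))
    m1 += p * q       # E[Q],   Eq. (7.24) with m = 1
    m2 += p * q**2    # E[Q^2], Eq. (7.24) with m = 2
print(round(m1, 2), round(m2 - m1**2, 2))  # -> 41.22 and about 17
```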

7.2.5. Approximate Technique: Harr’s Probabilistic Point Estimation (PE) Method
To avoid the computationally intensive nature of Rosenblueth’s PE method when the number of random variables is moderate
or large, Harr (1989) proposed an alternative probabilistic PE method which reduces the required model evaluations from 2^N to
2N and greatly enhances the applicability of the PE method for uncertainty analysis of practical problems. The method is a
second-moment method capable of taking into account the first two moments (that is, the mean and variance) of the random
variables involved and their correlations. Skew coefficients of the random variables are ignored by the method. Hence, the
method is appropriate for treating normally and other symmetrically distributed random variables. The theoretical basis of
Harr's PE method is built on orthogonal transformations of the correlation matrix.

The orthogonal transformation is an important tool for treating problems with correlated random variables. The main objective
of the transformation is to map correlated random variables from their original space to a new domain in which they become
uncorrelated. Hence, the analysis is greatly simplified.

Consider N multivariate random variables X = (X1, X2, …, XN)^t having a mean vector μX = (μ1, μ2, …, μN)^t and the correlation
matrix R(X):

R(X) =
[ 1      ρ_12   …   ρ_1N ]
[ ρ_21   1      …   ρ_2N ]
[ …      …      …   …    ]
[ ρ_N1   ρ_N2   …   1    ]

Note that the correlation matrix is a symmetric matrix, that is, ρij = ρji for i ≠ j.

The orthogonal transformation can be made using the eigenvalue eigenvector decomposition or spectral decomposition by
which R(X) is decomposed as

R(X) = V Λ V^t   (7.27)

where V = (v1, v2, ..., vN) is an N × N eigenvector matrix consisting of N eigenvectors, with vi being the ith column eigenvector,
and Λ = diag(λ1, λ2, …, λN) is a diagonal matrix of the corresponding eigenvalues.

In terms of the eigenvectors and eigenvalues, the random vector in the original parameter space can be expressed as

X = μ + D^(1/2) V Λ^(1/2) Y    (7.28)

in which Y is a vector of N standardized random variables having 0 as the mean vector and the identity matrix, I, as the
covariance matrix, and D is a diagonal matrix of variances of N random variables. The transformed variables, Y, are linear
functions of the original random variables, therefore, if all the original random variables X are normally distributed, then the
standardized transformed random variables, Y, are independent standard normal random variables.
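As a sketch of this decomposition (NumPy assumed; the correlation values below are illustrative, not taken from the text), one can verify Eq. (7.27) and the decorrelating property of the transformation:

```python
import numpy as np

# Illustrative 3-variable correlation matrix R(X) (values assumed)
R = np.array([[1.00, -0.75, 0.00],
              [-0.75, 1.00, 0.00],
              [0.00,  0.00, 1.00]])

# Spectral decomposition R(X) = V diag(lam) V^t, as in Eq. (7.27)
lam, V = np.linalg.eigh(R)

# Independent standard normal Y mapped to correlated standardized variables
# through V diag(sqrt(lam)) Y -- the standardized core of Eq. (7.28)
rng = np.random.default_rng(0)
Y = rng.standard_normal((3, 100_000))
Xs = V @ np.diag(np.sqrt(lam)) @ Y

print(np.round(np.corrcoef(Xs), 3))   # sample correlations close to R
```

Running the mapping in the reverse direction (from correlated samples to Y) is what simplifies the analysis: the transformed variables can be treated one axis at a time.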

For a multivariate model W = g(X1, X2, …, XN) involving N random variables, Harr’s method selects the points of evaluation
located at the intersections of the N eigenvector axes with the surface of a hypersphere having a radius of √N in the eigenspace
as

xi± = μ ± √N D^(1/2) vi,   i = 1, 2, …, N    (7.29)

in which xi± represents the vector of coordinates of the N random variables in the parameter space corresponding to the ith
eigenvector vi ; μ = (μ1 , μ2 , ..., μN) t, a vector of means of N random variables X.

Based on the 2N points determined by Eq. (7.29), the function values at each of the 2N points can be computed. Then, the mth
moment of the model output W about the origin can be calculated according to the following equations:

w̄i^m = (wi+^m + wi−^m) / 2    (7.30)

E[W^m] ≈ Σ(i=1 to N) λi w̄i^m / N    (7.31)

in which wi± = g(xi±) are the model values at the two points on the ith eigenvector.

Alternatively, the orthogonal transformation can be made to the covariance matrix.

Example 7.4. Referring to Example 7.2, determine the uncertainty of the sewer flow capacity using Harr’s PE method.

Solution: From the previous example, statistical moments of random parameters in Manning’s formula,

are

From the given correlation relation among the three random variables, the correlation matrix can be established as

The corresponding eigenvector matrix and eigenvalue matrix are, respectively,

and

According to Eq. (7.29), the coordinates of the 2 × 3 = 6 intersection points corresponding to the three eigenvectors and the
hypersphere with a radius of √3 can be determined as

The resulting coordinates at the six intersection points from the above equation are listed in column (2) of the table given
below. Substituting the values of x in column (2) into Manning’s formula, the corresponding sewer flow capacities are listed in
column (3). Column (4) lists the value of Q2 for computing the second moment about the origin later. After columns (3) and (4)
are obtained, the averaged value of Q and Q2 along each eigenvector are computed and listed in columns (5) and (6),
respectively.

Point  x = (n, D, S)                 Q      Q^2      Avg. Q  Avg. Q^2
(1)    (2)                           (3)    (4)      (5)     (6)

1+     (0.01592, 2.9265, 0.00500)    36.16  1307.82
1–     (0.01408, 3.0735, 0.00500)    46.61  2172.14  41.39   1739.98

2+     (0.01592, 3.0735, 0.00500)    41.22  1699.09
2–     (0.01408, 2.9265, 0.00500)    40.89  1671.99  41.05   1685.54

3+     (0.01500, 3.00, 0.00543)      42.74  1826.45
3–     (0.01500, 3.00, 0.00457)      39.20  1537.17  40.97   1681.81

The mean of the sewer flow capacity can be calculated, according to Eq. (7.31), with m = 1, as

The second moment about the origin is calculated as:

The variance of the sewer flow capacity then can be calculated as

Hence, the standard deviation of sewer flow capacity is . Comparing with the results in Examples 7.2 and
7.3, one observes that the mean and variance of the sewer flow capacity computed with Harr's PE method lie between those
computed with the FOVE method and Rosenblueth’s method.
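Because the correlation matrix of Example 7.4 is not reproduced here, the sketch below assumes the correlation ρ(n, D) = −0.75 used later in Example 7.8; under that assumption the six Harr points match those tabulated above. NumPy is assumed available; this is an illustrative sketch of Eqs. (7.29)-(7.31), not the book's own code:

```python
import numpy as np

def manning_q(n, d, s):
    """Sewer capacity by Manning's formula (ft^3/s), as in Example 7.2."""
    return 0.463 * d**2.67 * np.sqrt(s) / n

mu    = np.array([0.015, 3.0, 0.005])        # means of (n, D, S)
sigma = mu * np.array([0.05, 0.02, 0.05])    # std devs from the CVs
R = np.array([[1.00, -0.75, 0.00],           # assumed correlation matrix
              [-0.75, 1.00, 0.00],
              [0.00,  0.00, 1.00]])

lam, V = np.linalg.eigh(R)                   # spectral decomposition, Eq. (7.27)
N = len(mu)

# Harr's 2N points, Eq. (7.29): x_i(+/-) = mu +/- sqrt(N) D^(1/2) v_i
m1 = m2 = 0.0
for i in range(N):
    step = np.sqrt(N) * sigma * V[:, i]
    q_plus  = manning_q(*(mu + step))
    q_minus = manning_q(*(mu - step))
    # eigenvector averages weighted by lam_i / N, Eqs. (7.30)-(7.31)
    m1 += lam[i] * (q_plus    + q_minus)    / (2 * N)
    m2 += lam[i] * (q_plus**2 + q_minus**2) / (2 * N)

mean_q = m1
std_q  = np.sqrt(m2 - m1**2)
print(mean_q, std_q)   # approx. 41.2 ft^3/s and 4.1 ft^3/s
```

The six evaluation points printed by `mu ± step` reproduce column (2) of the table above, which supports the assumed correlation value.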

7.3. RELIABILITY ANALYSIS METHODS


In a multitude of hydraulic engineering problems uncertainties in data and in theory, including design and analysis procedures,
warrant a probabilistic treatment of the problems. The risk associated with the potential failure of a hydraulic engineering
system is the result of the combined effects of inherent randomness of external loads and various uncertainties involved in the
analysis, design, construction, and operational procedures. Hence, to evaluate the probability that a hydraulic engineering
system would function as designed requires performing uncertainty and reliability analyses.

As discussed in Sec. 7.1.2, the reliability, ps, is defined as the probability of safety (or non-failure), in which the resistance of the
structure exceeds the load, that is, ps = P(R ≥ L). Conversely, the failure probability, pf, can be computed as pf = P(R < L) = 1 − ps.
The above definitions of reliability and failure probability are equally applicable to component reliability as well as total system
reliability. In hydraulic design, the resistance and load are frequently functions of a number of random variables, that is, L = g(XL)
= g(X1 , X2 , …, Xm ) and R = h(XR) = h(Xm+ 1, Xm+ 2, … , Xn ) where X1 , X2 ,…, Xn are random variables defining the load function, g(XL),
and the resistance function, h(XR). Accordingly, the reliability is a function of random variables

ps = P[g(XL) ≤ h(XR)] = P[L ≤ R]    (7.32)

As discussed in the preceding sections, the natural hydrologic randomness of flow and precipitation are important parts of the
uncertainty in the design of hydraulic structures. However, other uncertainties also may be significant and should not be
ignored.

7.3.1. Performance Functions and Reliability Index


In the reliability analysis, Eq. (7.32) can alternatively be written, in terms of a performance function, W(X) = W(XL, XR), as

ps = P[W(XL, XR) ≥ 0] = P[W(X) ≥ 0]    (7.33)

in which X is the vector of basic random variables in the load and resistance functions. In reliability analysis, the system state is
divided into safe (satisfactory) set defined by W(X) > 0 and failure (unsatisfactory) set defined by W(X) < 0 (Fig. 7.2). The
boundary that separates the safe set and failure set is the failure surface, defined by the function W(X) = 0, called the limit state
function. Since the performance function W(X) defines the condition of the system, it is sometimes called the system state
function.

The performance function W(X) can be expressed differently as

W(X) = R − L = h(XR) − g(XL)    (7.34)

W(X) = (R/L) − 1 = [h(XR)/g(XL)] − 1    (7.35)

W(X) = ln(R/L) = ln[h(XR)] − ln[g(XL)]    (7.36)

Referring to Sec. 7.1.2, Eq. (7.34) is identical to the notion of safety margin, whereas Eqs. (7.35) and (7.36) are based on safety
factor representations.

Also in the reliability analysis, a frequently used reliability indicator, β, is called the reliability index. The reliability index was first
introduced by Cornell (1969) and later formalized by Ang and Cornell (1974). It is defined as the ratio of the mean to the
standard deviation of the performance function W(X):

β = μW / σW    (7.37)

in which μW and σW are the mean and standard deviation of the performance function, respectively. From Eq. (7.37), assuming
an appropriate PDF for the random performance function W(X), the reliability then can be computed as

ps = 1 − FW(0) = 1 − FW′(−β)    (7.38)

Figure 7.2 System states defined by performance function

in which FW(·) is the cumulative distribution function (CDF) of the performance function W and W′ is the standardized
performance function defined as W′ = (W − μW)/σW. The expressions of reliability, ps, for some distributions of W(X) are given in
Table 7.6. For practically all probability distributions used in the reliability analysis, the value of the reliability, ps, is a strictly
increasing function of the reliability index, β. In practice, the normal distribution is commonly used for W(X), in which case the
reliability can be simply computed as

ps = Φ(β)    (7.39)

where Φ(·) is the standard normal CDF whose values can be found in various probability and statistics books.
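The values of Φ(·) need not be read from tables; the Python standard library's error function suffices. A minimal sketch of Eq. (7.39):

```python
from math import erf, sqrt

def std_normal_cdf(z):
    """Standard normal CDF, Phi(z) = (1 + erf(z / sqrt(2))) / 2."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Reliability ps = Phi(beta) for a few reliability-index values, Eq. (7.39)
for beta in (1.0, 2.0, 3.0):
    print(beta, round(std_normal_cdf(beta), 5))
# -> 0.84134, 0.97725, 0.99865
```

The same helper is reused conceptually throughout the examples that follow wherever a reliability index is converted to a reliability.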

7.3.2. Direct Integration Method


From Eq. (7.1), the reliability can be computed in terms of the joint PDF of the load and resistance as

ps = P(L ≤ R) = ∫[ℓ1, ℓ2] { ∫[ℓ, r2] fR,L(r, ℓ) dr } dℓ    (7.40)

in which fR,L(r, ℓ) is the joint PDF of random load, L, and resistance, R; r and ℓ are dummy arguments for the resistance and
load, respectively; and (r1, r2) and (ℓ1, ℓ2) are the lower and upper bounds for the resistance and load, respectively. This
computation of reliability is commonly referred to as the load–resistance interference.

When load and resistance are statistically independent, Eq. (7.40) reduces to

ps = ∫[ℓ1, ℓ2] fL(ℓ) [1 − FR(ℓ)] dℓ    (7.41)

or

ps = ∫[r1, r2] fR(r) FL(r) dr = ER[FL(R)]    (7.42)

in which FL(·) and FR(·) are the marginal CDFs of random load L and resistance R, respectively; and ER[FL(R)] is the expectation of
the CDF of random load over the feasible range of the resistance. A schematic diagram illustrating load–resistance interference
in the reliability computation, when the load and resistance are independent random variables, is shown in Fig. 7.3.
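As a numerical illustration of Eq. (7.42), the sketch below integrates fR(r) FL(r) for a hypothetical pair of independent normal load and resistance (the parameter values are invented for illustration) and compares the result against the closed-form normal solution:

```python
from math import erf, exp, pi, sqrt

def phi(z):                       # standard normal PDF
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def Phi(z):                       # standard normal CDF
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical independent normal resistance and load
mu_r, sig_r = 41.0, 3.2
mu_l, sig_l = 35.0, 2.0

# ps = E_R[F_L(R)] = integral of f_R(r) F_L(r) dr, Eq. (7.42),
# evaluated by the trapezoidal rule over +/- 8 standard deviations
lo, hi, m = mu_r - 8 * sig_r, mu_r + 8 * sig_r, 4000
h = (hi - lo) / m
ps = 0.0
for k in range(m + 1):
    r = lo + k * h
    wgt = 0.5 if k in (0, m) else 1.0
    ps += wgt * phi((r - mu_r) / sig_r) / sig_r * Phi((r - mu_l) / sig_l) * h

# Closed form for two independent normals: Phi(dmu / sqrt(sig_r^2 + sig_l^2))
ps_exact = Phi((mu_r - mu_l) / sqrt(sig_r**2 + sig_l**2))
print(ps, ps_exact)
```

For non-normal marginals the closed form is generally unavailable, but the same numerical interference integral still applies, which is the practical point of Eq. (7.42).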

In the case that the PDF of the performance function W is known or derived, the reliability can be computed, according to Eq.
(7.33), as

ps = P(W ≥ 0) = ∫[0, ∞) fW(w) dw    (7.43)

in which fW (w) is the PDF of the performance function.

In the conventional reliability analysis of hydraulic engineering design, uncertainty from the hydraulic aspect often is ignored.
Treating the resistance or capacity of the hydraulic structure as a constant reduces Eq. (7.40) to

ps = P(L ≤ ro) = ∫[ℓ1, ro] fL(ℓ) dℓ = FL(ro)    (7.44)

Table 7.6 Reliability Formulas for Selected Distribution (After Yen et al., 1986)

Source: From Yen et al., 1986.

Figure 7.3 Schematic diagram of load-resistance interference

in which ro is the resistance of the hydraulic structure, a deterministic constant. If the PDF of the hydrologic load is that of an annual
event, such as the annual maximum flood, the resulting annual reliability can be used to calculate the corresponding design
return period.

The method of direct integration requires that the PDFs of the load and resistance, or of the performance function, be known or derived.
This is seldom the case in practice, especially for the joint PDF, because of the complexity of the hydrologic and hydraulic models used
in the design and of the natural system being approximated by these models. Explicit solutions of the direct integration can be
obtained for only a few PDFs, as shown in Table 7.6 for the reliability ps. For most other PDFs the use of numerical integration is
unavoidable. For example, the safety margin W expressed by Eq. (7.34) has a normal distribution if both the load
and resistance functions are linear and all random variables are normally distributed. In terms of the safety factors expressed by
Eqs. (7.35) and (7.36), the distribution of W(X) is lognormal if both the load and resistance functions have multiplicative forms
involving lognormal random variables.

Example 7.5 Referring to Example 7.2, the random variables n, D, and S used in Manning's formula to compute the sewer
capacity are independent lognormal random variables with the following statistical properties:

Parameter Mean Coeff. of Variation

n (ft1/6) 0.015 0.05

D (ft) 3.0 0.02

S (ft/ft) 0.005 0.05

Compute the reliability that the sewer can convey the inflow discharge of 35 ft3 /s.

Solution: In this example, the resistance function is R(n, D, S) = 0.463 n^(−1) D^2.67 S^0.5 and the load is L = 35 ft3/s. Since all three
stochastic parameters are lognormal random variables, the performance function appropriate for use is
W(n, D, S) = ln R − ln L = ln(0.463) − ln(n) + 2.67 ln(D) + 0.5 ln(S) − ln(35).

The reliability ps = P[W(n, D, S) ≥ 0] can then be computed as follows.

Since n, D, and S are independent lognormal random variables, ln(n), ln(D), and ln(S) are independent normal random
variables. The performance function W(n, D, S) is a linear function of normal random variables; then, by the reproductive
property of normal random variables, W(n, D, S) also is a normal random variable with the mean

and variance

The means and variances of log–transformed variables can be obtained as

Then, the mean and variance of the performance function, W(n, D, S), can be computed as

The reliability can be obtained as
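The log-space computation of this example can be checked with a short script using only the standard library and the lognormal moment relations σ_lnX^2 = ln(1 + Ω^2) and μ_lnX = ln(μ) − σ_lnX^2/2:

```python
from math import erf, log, sqrt

def Phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

means = {'n': 0.015, 'D': 3.0, 'S': 0.005}
cvs   = {'n': 0.05,  'D': 0.02, 'S': 0.05}

var_ln = {k: log(1.0 + cvs[k]**2) for k in means}           # sigma_lnX^2
mu_ln  = {k: log(means[k]) - var_ln[k] / 2.0 for k in means} # mu_lnX

# W = ln(0.463) - ln n + 2.67 ln D + 0.5 ln S - ln 35, normal by reproduction
mu_w  = log(0.463) - mu_ln['n'] + 2.67 * mu_ln['D'] + 0.5 * mu_ln['S'] - log(35.0)
var_w = var_ln['n'] + 2.67**2 * var_ln['D'] + 0.5**2 * var_ln['S']

beta = mu_w / sqrt(var_w)
ps = Phi(beta)
print(beta, ps)   # roughly beta = 2.05 and ps = 0.98
```

Because the performance function is exactly normal here, this reliability is exact rather than a first-order approximation.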

7.3.3. Mean-Value First-Order Second-Moment (MFOSM) Method


In the first-order second-moment methods, the performance function W(X), defined on the basis of the load and resistance
functions g(XL) and h(XR), is expanded in a Taylor series at a selected reference point. The second- and higher-order terms in
the series expansion are truncated, resulting in an approximation that requires only the first two statistical moments of the random
variables. This simplification greatly enhances the practicality of the first-order methods because in many problems it is rather
difficult, if not impossible, to find the PDF of the variables, while it is relatively simple to estimate their first two statistical
moments. Detailed procedures of the first-order second-moment method are given in Sec. 7.2.3, which describes the FOVE
method for uncertainty analysis.

The MFOSM method for the reliability analysis first applies the FOVE method to estimate the statistical moments of the
performance function W(X). This is done by applying the expectation and variance operators to the first–order Taylor series
approximation of the performance function W(X), expanded at the mean values of the random variables. Once the mean and
standard deviation of W(X) are estimated, the reliability is computed according to Eqs. (7.38) or (7.39) with the reliability index
βMFOSM computed as

βMFOSM = W(μ) / [s^t C(X) s]^(1/2)    (7.45)

where μ and C(X) are the vector of means and the covariance matrix of the random variables X, respectively; s = ∇W(μ) is the
column vector of sensitivity coefficients, with each element ∂W/∂Xi evaluated at X = μ.

Example 7.6 Referring to Example 7.5, compute the reliability that the sewer capacity could convey an inflow peak discharge of
35 ft 3 /s. Assume that stochastic model parameters n, D, and S are uncorrelated.

Solution: The performance function for the problem is W = Q − 35. From Example 7.2, the mean and standard deviation of the
sewer capacity are 40.96 ft 3 /s and 3.17 ft3 /s, respectively. Therefore, the mean and standard deviation of the performance
function W are, respectively,

The MFOSM reliability index is β MFOSM = 5.96/3.17 = 1.880. Assuming a normal distribution for Q, the reliability that the sewer
capacity can accommodate a discharge of 35 ft3 /s is

The corresponding failure probability is pf = Φ (–1.880) = 0.0301.

Ang (1973), Cheng et al. (1986), and Yen and Ang (1971) indicated that, if the calculated reliability or failure probability is in the
extreme tail of a distribution, the shape of the tails of the distribution becomes very critical. In such cases, an accurate assessment
of the distribution of W(X) should be used to evaluate the reliability or failure probability.
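The MFOSM computation of Example 7.6 can be sketched as follows. The FOVE moments of Q are recomputed here from the first-order sensitivities of Manning's formula, so the numbers differ slightly from the rounded 40.96 and 3.17 quoted in the text:

```python
from math import erf, sqrt

def Phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Means and coefficients of variation of (n, D, S), assumed uncorrelated
mu_n, mu_d, mu_s = 0.015, 3.0, 0.005
cv_n, cv_d, cv_s = 0.05, 0.02, 0.05

q_mean = 0.463 * mu_d**2.67 * sqrt(mu_s) / mu_n      # Q evaluated at the means
# FOVE: CV of Q from first-order sensitivities of Q = 0.463 n^-1 D^2.67 S^0.5
cv_q = sqrt(cv_n**2 + 2.67**2 * cv_d**2 + 0.5**2 * cv_s**2)
q_std = cv_q * q_mean

beta = (q_mean - 35.0) / q_std      # MFOSM reliability index for W = Q - 35
ps = Phi(beta)
print(beta, ps)   # approximately 1.9 and 0.97
```

Note how close this is to the exact lognormal result of Example 7.5; the two diverge more strongly when the target probability sits farther out in the tail, which is exactly the caveat raised above.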

7.3.4. Advanced First-Order Second-Moment (AFOSM) Method


The main thrust of the AFOSM method is to mitigate the deficiencies associated with the MFOSM method, while keeping the
simplicity of the first–order approximation. The difference between the AFOSM and MFOSM methods is that the expansion
point in the first–order Taylor series expansion in the AFOSM method is located on the failure surface defined by the limit state
equation, W(x) = 0.

In cases where several random variables are involved in a performance function, the number of possible combinations of such
variables satisfying W(x) = 0 is infinite. From the design viewpoint, one is more concerned with the combination of random
variables that would yield the lowest reliability or highest failure probability. The point on the failure surface associated with the
lowest reliability is the one having the shortest distance in the standardized space to the point where the means of the random
variables are located. This point is called the design point (Hasofer and Lind, 1974) or the most probable failure point
(Shinozuka, 1983).

In the uncorrelated, standardized parameter space, the design point is the point on the failure surface W(x′) = 0 that has the
shortest distance to the origin x′ = 0. Such a point can be found by solving

minimize |x′|    (7.46a)

subject to W(x′) = 0    (7.46b)

in which |x′| represents the length of the vector x′. Using the method of Lagrange multipliers, the design point can be determined
as

x′* = −|x′*| α*    (7.47)

in which α* = ∇x′W(x′*)/|∇x′W(x′*)| is a unit vector emanating from the design point x′* and pointing toward the origin (Fig. 7.4).
The elements of α* are called the directional derivatives, representing the values of the cosines of the angles between the gradient vector
and the axes of the standardized variables. Geometrically, Eq. (7.47) shows that the vector α* is perpendicular to the
tangent hyperplane passing through the design point. Recall that xi = μi + σi xi′ for i = 1, 2, …, N. By the chain rule of calculus, the
shortest distance, in terms of the original variables x, can be expressed as

|x′*| = Σi si*(μi − xi*) / [Σi (si* σi)^2]^(1/2)    (7.48)

Figure 7.4 Characteristics of design point in standardized space

in which x* = (x1*, x2*, …, xN*)^t is the point in the original parameter x-space, which can be easily determined from the design
point in x′-space as xi* = μi + σi xi′* for i = 1, 2, …, N. It is shown in the next subsection that the shortest distance from the origin to the
design point, |x′*|, in fact, is the reliability index based on the first-order Taylor series expansion of the performance function
W(X) with the expansion point at x*.

7.3.4.1. First-order approximation of performance function at design point.


Referring to Eq. (7.12), the first–order approximation of the performance function, W(X), taking the expansion point xo = x*, is

W(X) ≈ Σi si*(Xi − xi*) = s*^t (X − x*)    (7.49)

in which s* = (s1*, s2*, …, sN*)^t is a vector of sensitivity coefficients of the performance function W(X) evaluated at the expansion
point x*, that is, si* = ∂W/∂Xi evaluated at x*.

W(x*) is not on the right–hand–side of Eq. (7.49) because W(x*) = 0. Hence, at the expansion point x*, the expected value and
the variance of the performance function W(X) can be approximated as

μW ≈ s*^t (μ − x*)    (7.50)

σW^2 ≈ s*^t C(X) s*    (7.51)

in which μ and C(X) are the mean vector and covariance matrix of the random variables, respectively. If the random variables
are uncorrelated, Eq. (7.51) reduces to

σW^2 ≈ Σi (si* σi)^2    (7.52)

in which σi is the standard deviation of the ith random variable.

When the random variables are uncorrelated, the standard deviation of the performance function W(X) can alternatively be
expressed in terms of the directional derivatives as

σW ≈ Σi αi* si* σi    (7.53)

where αi* is the directional derivative for the ith random variable at the expansion point x*

αi* = si* σi / [Σj (sj* σj)^2]^(1/2)    (7.54a)

or in matrix form

α* = D^(1/2) s* / [s*^t D s*]^(1/2)    (7.54b)

which is identical to the definition of α* in Eq. (7.47). With the mean and standard deviation of the performance function W(X)
computed at x*, the AFOSM reliability index βAFOSM , given in Eq. (7.37), can be determined as

βAFOSM = μW / σW = s*^t (μ − x*) / [s*^t C(X) s*]^(1/2)    (7.55)

Equation (7.55) is identical to Eq. (7.48), indicating that the AFOSM reliability index βAFOSM is identical to the shortest distance
from the origin to the design point in the standardized parameter space. This reliability index βAFOSM is also called the Hasofer–
Lind reliability index.

Once the value of βAFOSM is computed, the reliability can be estimated by Eq. (7.39) as ps = Φ(βAFOSM). The
sensitivity of βAFOSM with respect to the uncorrelated, standardized random variables is

∂βAFOSM / ∂xi′ = −αi*    (7.56)

Equation (7.56) shows that −αi* is the rate of change in βAFOSM due to a one standard deviation change in random variable Xi at X
= x*. Therefore, the relation between βAFOSM and x′* can be expressed as

x′* = −βAFOSM α*    (7.57)

It also can be shown that the sensitivity of reliability or failure probability with respect to each random variable can be computed
as

∂ps/∂xi′ = φ(βAFOSM) ∂βAFOSM/∂xi′ = −φ(βAFOSM) αi*    (7.58a)

in which φ(·) is the standard normal PDF, or in matrix form,

∇x′ ps = −φ(βAFOSM) α*    (7.58b)

These sensitivity coefficients reveal the relative importance of each random variable on reliability or failure probability.

7.3.4.2. Algorithms of AFOSM for independent normal parameters.


In the case that X are independent normal random variables, standardization of X reduces them to independent standard
normal random variables Z' with mean 0 and covariance matrix I with I being an N × N identity matrix. Hasofer and Lind (1974)
proposed the following recursive equation for determining the design point

z′(r+1) = [∇W(z′(r))^t z′(r) − W(z′(r))] ∇W(z′(r)) / |∇W(z′(r))|^2    (7.59)

in which subscripts (r) and (r + 1) represent the iteration numbers; −α denotes the unit gradient vector of the failure surface
pointing toward the failure region. It is more convenient to rewrite the above recursive equation in the original x-space as

x(r+1) = μ + [s(r)^t (x(r) − μ) − W(x(r))] D s(r) / [s(r)^t D s(r)]    (7.60)

in which s(r) is the vector of sensitivity coefficients ∂W/∂Xi evaluated at x(r) and D = diag(σ1^2, σ2^2, …, σN^2).

Based on Eq. (7.60), the Hasofer–Lind algorithm for the AFOSM reliability analysis for problems involving uncorrelated, normal
random variables can be outlined as follows.

Step 1. Select an initial trial solution x(r).

Step 2. Compute W(x(r)) and the corresponding sensitivity coefficient vector s(r).

Step 3. Revise solution point x(r+1) according to Eq. (7.60).

Step 4. Check if x(r) and x(r+1) are sufficiently close. If yes, compute the reliability index βAFOSM according to Eq. (7.55) and the corresponding reliability ps = Φ(βAFOSM); then go to Step 5. Otherwise, update the solution point by letting x(r) = x(r+1) and return to Step 2.

Step 5. Compute the sensitivity of the reliability index and reliability with respect to changes in the random variables according to Eqs. (7.56), (7.57), and (7.58).

Because of the nature of nonlinear optimization, the above algorithm does not necessarily converge to the true design point
associated with the minimum reliability index. Therefore, Madsen et al. (1986) suggest using several different initial trial points
and choosing the smallest resulting reliability index to compute the reliability.

Sometimes a system might have several design points. Such a condition could be due to the use of multiple
performance functions or to a performance function that is highly irregular in shape. In the case that there are J such design points,
the reliability of the system requires that the system performs satisfactorily at all design points. Assuming independence among the
individual design points, the system reliability is the probability of survival at all design points, which can be
calculated as

ps = Π(j=1 to J) Φ(βj)    (7.61)
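For hypothetical reliability indices at J = 3 design points (the β values below are invented for illustration), Eq. (7.61) is a one-line product of the component reliabilities Φ(βj):

```python
from math import erf, prod, sqrt

def Phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical reliability indices at three design points
betas = [2.1, 2.6, 3.0]
ps_system = prod(Phi(b) for b in betas)   # Eq. (7.61)
print(ps_system)
```

As the product form suggests, the system reliability is dominated by the design point with the smallest reliability index.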

Example 7.7 (uncorrelated, normal). Refer to the data as shown below for the storm sewer reliability analysis in previous
examples.

Parameter Mean Coefficient of Variation

n (ft1/6) 0.015 0.05

D (ft) 3.0 0.02

S (ft/ft) 0.005 0.05

Assume that all three random variables are independent, normal random variables. Compute the reliability that the sewer can
convey an inflow discharge of 35 ft3 /s by the Hasofer–Lind algorithm.

Solution: The initial solution is taken to be the means of the three random variables, namely, x(1) = µ = (µn, µD, µS)^t = (0.015, 3.0,
0.005)^t. The covariance matrix for the three random variables is D = diag(σn^2, σD^2, σS^2) = diag(0.00075^2, 0.06^2, 0.00025^2).

Because the origin (n, D, S) = (0, 0, 0) lies in the failure region relative to the target of 35 ft3/s, as opposed to the safe
region assumed in the mathematical derivations, a negative sign is applied to the performance function QC − QL, that is,
W(n, D, S) = −(QC − QL) = 35 − 0.463 n^(−1) D^2.67 S^0.5.
At x(1), the value of the performance function is W(n, D, S) = −6.010, which is not equal to zero. This implies that the solution point
x(1) does not lie on the limit state surface. By Eq. (7.60), the new solution, x(2), can be obtained as x(2) = (0.01592, 2.921,
0.004847)^t. Then, one checks the difference between the two consecutive solution points as

which is considered large and, therefore, the iteration continues. The following table lists the solution point, x(r), its
corresponding sensitivity vector, s(r), and the vector of directional derivatives, α(r), in each iteration. The iteration stops when the
difference between two consecutive solutions is less than 0.001 and the value of the performance function is less than
0.001.

After four iterations, the solution converges to the design point x* = (n*, D*, S*) = (0.01594, 2.912, 0.004827). At the design point
x*, the mean and standard deviation of the performance function W can be estimated by Eqs. (7.50) and (7.53), respectively, as

The reliability index can be computed as β* = μW* / σW* = 2.057 and the corresponding reliability and failure probability can be
computed, respectively, as

Finally, at the design point x*, the sensitivity of the reliability index and the reliability with respect to each of the three random
variables can be computed by Eqs. (7.57) and (7.58). The results are shown in columns (4)–(7) of the table below.

Variable xi α i* ∂β/∂xi′ ∂ps/∂xi′ ∂β/∂xi ∂ps/∂xi (xi/β)∂β/∂xi (xi/ps)∂ps/∂xi

(1) (2) (3) (4) (5) (6) (7) (8) (9)

n 0.01594 0.6119 –0.6119 –0.02942 –815.8 –39.22 –6.323 –0.638

D 2.912 –0.7157 0.7157 0.03441 11.9 0.57 16.890 1.703

S 0.00483 –0.3369 0.3369 0.01619 1347.0 64.78 3.161 0.319

From the above table, the quantities ∂β/∂x' and ∂ps/∂x' show the sensitivity of the reliability index and the reliability for a one–
standard–deviation change in the random variables whereas ∂β/∂x and ∂ps/∂x correspond to a one unit change of random
variables in the original space. The sensitivity of β and ps associated with Manning’s roughness is negative whereas those for
pipe size and slope are positive. This indicates that an increase in Manning’s roughness would result in a decrease in β and ps,

whereas an increase in slope and/or pipe size would increase β and ps. This indication is physically plausible because an
increase in Manning’s roughness would decrease the flow-carrying capacity of the sewer whereas, on the other hand, an
increase in pipe diameter and/or pipe slope would increase the flow-carrying capacity of the sewer.

Furthermore, one can judge the relative importance of each random variable based on the absolute values of the sensitivity
coefficients. It is generally difficult to draw meaningful conclusions based on the relative magnitudes of ∂β/∂x and ∂ps/∂x
because the units of the different random variables are not the same. Therefore, sensitivity measures not affected by the dimensions of
the random variables, such as ∂β/∂x′ and ∂ps/∂x′, are generally more useful. With regard to the change in β or ps per one standard
deviation change in each variable X, for example, pipe diameter is significantly more important than pipe slope.

An alternative sensitivity measure, called the relative sensitivity, is defined as

si% = (xi / y) (∂y / ∂xi)    (7.62)

in which si% is a dimensionless quantity measuring the percentage change in the dependent variable y due to a 1 percent change
in the variable xi. The last two columns of the table given above show the percentage changes in β and ps due to a 1 percent
change in Manning’s roughness, pipe diameter, and pipe slope. As can be observed, the pipe diameter is the most important
random variable in Manning’s formula affecting the reliability of the flow-carrying capacity of the sewer.
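The Hasofer–Lind iteration of Example 7.7 can be sketched in a few lines of Python (NumPy assumed; an illustrative sketch, not the book's code). The sketch uses the standard convention W = Q − 35 directly; the sign flip used in the example changes only the sign bookkeeping, not the design point or β:

```python
import numpy as np

def capacity(x):
    """Sewer capacity by Manning's formula, x = (n, D, S)."""
    n, d, s = x
    return 0.463 * d**2.67 * np.sqrt(s) / n

mu    = np.array([0.015, 3.0, 0.005])        # means of (n, D, S)
sigma = mu * np.array([0.05, 0.02, 0.05])    # standard deviations
D = np.diag(sigma**2)                        # covariance (uncorrelated case)

def w(x):                                    # performance function W = Q - 35
    return capacity(x) - 35.0

def grad_w(x):                               # analytic sensitivity vector s
    n, d, s = x
    q = capacity(x)
    return np.array([-q / n, 2.67 * q / d, 0.5 * q / s])

x = mu.copy()                                # Step 1: initial trial point
for _ in range(50):
    g = grad_w(x)                            # Step 2: sensitivities
    # Step 3: recursive update, Eq. (7.60)
    x_new = mu + (g @ (x - mu) - w(x)) * (D @ g) / (g @ D @ g)
    if np.linalg.norm(x_new - x) < 1e-12:    # Step 4: convergence check
        x = x_new
        break
    x = x_new

g = grad_w(x)
beta = g @ (mu - x) / np.sqrt(g @ D @ g)     # reliability index, Eq. (7.55)
print(x, beta)   # design point near (0.01594, 2.912, 0.00483), beta near 2.06
```

Restarting the loop from several different trial points, as Madsen et al. (1986) recommend, guards against convergence to a secondary design point.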

7.3.4.3. Treatment of correlated normal random variables.


When some of the random variables involved in the performance function are correlated, transformation of correlated variables
to uncorrelated ones is made. This can be achieved through the orthogonal transformation such as the spectral decomposition
described above.

Consider that the original random variables are multivariate normal random variables. The original random variables X can be
transformed to uncorrelated, standardized normal variables Z' as

Z′ = Λx^(−1/2) Vx^t D^(−1/2) (X − μ)    (7.63)

in which Λx and Vx are, respectively, the eigenvalue matrix and eigenvector matrix corresponding to the correlation matrix R(X), and D is the diagonal matrix of variances of X.
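This transformation can be checked empirically: correlated normal samples pushed through Eq. (7.63) should come out uncorrelated with unit variances. The sketch below uses the statistics of Example 7.8 (NumPy assumed):

```python
import numpy as np

mu    = np.array([0.015, 3.0, 0.005])
sigma = np.array([0.00075, 0.06, 0.00025])
R = np.array([[1.00, -0.75, 0.00],
              [-0.75, 1.00, 0.00],
              [0.00,  0.00, 1.00]])
C = np.outer(sigma, sigma) * R               # covariance matrix of X

lam, V = np.linalg.eigh(R)                   # eigenvalues/eigenvectors of R(X)

rng = np.random.default_rng(1)
X = rng.multivariate_normal(mu, C, size=200_000).T

# Z' = Lam^(-1/2) V^t D^(-1/2) (X - mu), Eq. (7.63)
Dinv_half = np.diag(1.0 / sigma)
Z = np.diag(lam**-0.5) @ V.T @ Dinv_half @ (X - mu[:, None])

print(np.round(np.cov(Z), 3))   # close to the 3x3 identity matrix
```

Because the original variables are multivariate normal, the decorrelated Z′ are not merely uncorrelated but independent standard normals, which is what the AFOSM algorithm exploits.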

In the transformed domain as defined by Z', the directional derivatives of the performance function in z'–space, αz', can be
computed, according to Eq. (7.47), as

αz′ = sz′ / [sz′^t sz′]^(1/2)    (7.64)

in which the vector of sensitivity coefficients in Z′-space, sz′, can be obtained from W(x) through the chain rule of
calculus as

sz′ = Λx^(1/2) Vx^t D^(1/2) sx    (7.65)

in which sx is the vector of sensitivity coefficients of the performance function with respect to the original random variables X.

After the design point is found, one also is interested in the sensitivity of the reliability index and failure probability with respect
to changes in the involved random variables. In the uncorrelated, standardized normal Z'–space, the sensitivity of β and ps with
respect to Z' can be computed by Eqs. (7.57) and (7.58) with X' replaced by Z'. The sensitivity of β with respect to X in the
original parameter space then can be obtained as

∇x β = D^(−1/2) Vx Λx^(−1/2) ∇z′ β    (7.66)

from which the sensitivity for ps can be computed by Eq. (7.58b). The procedure for Hasofer–Lind's approach to handle the
case of correlated normal variables is given below.

Step 1. Select an initial trial solution x(r).

Step 2. Compute W(x(r)) and the corresponding sensitivity coefficient vector s x,(r).

Step 3. Revise the solution point x(r+1) according to

x(r+1) = μ + [sx,(r)^t (x(r) − μ) − W(x(r))] C(X) sx,(r) / [sx,(r)^t C(X) sx,(r)]    (7.67)

Step 4. Check if x(r) and x(r+1) are sufficiently close. If yes, compute the reliability index β(r) according to

β(r) = sx,(r)^t (μ − x(r+1)) / [sx,(r)^t C(X) sx,(r)]^(1/2)    (7.68)

and the corresponding reliability ps = Φ(βAFOSM); then go to Step 5. Otherwise, update the solution point by letting x(r) = x(r+1) and return to Step 2.

Step 5. Compute the sensitivity of reliability index and reliability with respect to changes

in random variables at the design point x*.

Example 7.8 (Correlated, normal). Refer to the data in Example 7.7 for the storm sewer reliability analysis problem. Assume that
Manning’s roughness (n) and pipe diameter (D) are dependent normal random variables having a correlation coefficient of
−0.75. Furthermore, the pipe slope (S) also is a normal random variable but is independent of Manning’s roughness and pipe
size. Compute the reliability that the sewer can convey an inflow discharge of 35 ft3 /s by the Hasofer–Lind algorithm.

Solution. The initial solution is taken as the means of the three random variables, namely, x(1) = µ = (µn, µD, µS) t = (0.015, 3.0,
0.005)t. Since the random variables are correlated normal random variables with a correlation matrix as

by the spectral decomposition, the eigenvalue matrix associated with the correlation matrix R(X) is Λx = diag(1.75, 0.25, 1.00)
and the corresponding eigenvector matrix Vx is

At x(1) = ( 0.015, 3.0, 0.005)t, the sensitivity vector of the performance function

is

and the value of the performance function is W(n, D, S) = −6.010, which is not equal to zero. This indicates that the solution point
x(1) does not lie on the limit state surface. Applying Eq. (7.67), the new solution, x(2), can be obtained as x(2) = (0.01569, 2.900,
0.004885)^t. Then, checking the difference between the two consecutive solutions,

This is considered to be large and therefore the iteration continues. The following table lists the solution point, x(r), its
corresponding sensitivity vector, s x,(r), and the vector of directional derivatives, αz',(r), in each iteration. The iteration stops when
the Euclidean distance between the two consecutive solution points is less than 0.001 and the value of the performance
function is less than 0.001.

After four iterations, the solution converges to the design point x* = (n*, D*, S*)^t = (0.01571, 2.891, 0.004872)^t. At the design point
x*, the mean and standard deviation of the performance function W can be estimated by Eqs. (7.50) and (7.53), respectively, as

The reliability index then can be computed as β* = μW*/σW* = 1.991, and the corresponding reliability and failure probability are, respectively, ps = Φ(β*) = 0.9767 and pf = 1 − ps = 0.0233.

Finally, at the design point x*, the sensitivity of the reliability index and reliability with respect to each of the three random
variables can be computed by Eqs. (7.57), (7.58), (7.66), and (7.62). The results are shown in the following table:

Variable   xi*       αi*       ∂β/∂zi    ∂ps/∂zi    ∂β/∂xi    ∂ps/∂xi   (xi/β)∂β/∂xi   (xi/ps)∂ps/∂xi
(1)        (2)       (3)       (4)       (5)        (6)       (7)       (8)            (9)

n          0.01571    0.0124   −0.0124   −0.00068   −17.72    −0.9746   −0.1399        −0.01568
D          2.891     −0.9999    0.9999    0.05500     0.26     0.0142    0.3734         0.04186
S          0.00487   −0.0071    0.0071    0.00039    28.57     1.5720    0.0699         0.00784

The sensitivity analysis indicates similar information about the relative importance of the random variables as in Example 7.7.

7.3.4.4. Treatment of non-normal random variables.


When non-normal random variables are involved, it is advisable to transform them into equivalent normal variables. Rackwitz (1976) and Rackwitz and Fiessler (1978) proposed an approach that transforms a non-normal distribution into an equivalent normal distribution such that the value of the CDF of the transformed equivalent normal distribution is the same as that of the original non-normal distribution at the design point x*. Later, Ditlevsen (1981) provided a theoretical proof of the convergence property of this normal transformation in reliability algorithms searching for the design point. Table 7.7 presents the normal equivalents for some non-normal distributions commonly used in reliability analysis.

By the Rackwitz (1976) approach, the normal transform at the design point x* satisfies the following condition:

Φ[(xi* − μi*N)/σi*N] = Fi(xi*)  (7.69)

in which Fi(xi*) is the CDF of the random variable Xi having a value at xi*; μi*N and σi*N are, respectively, the mean and standard deviation of the normal equivalent of the ith random variable at Xi = xi*; and zi* = Φ^−1[Fi(xi*)] is the corresponding standard normal quantile. Equation (7.69) indicates that the cumulative probability must be preserved between the original and the normal-transformed spaces. From Eq. (7.69), the following equation is obtained:

μi*N = xi* − zi* σi*N  (7.70)

Note that μi*N and σi*N are functions of the expansion point x*. To obtain the normal equivalent standard deviation, one can take the derivative of both sides of Eq. (7.69) with respect to xi, resulting in

fi(xi*) = φ(zi*)/σi*N

in which fi(·) and φ(·) are the PDFs of the random variable Xi and the standard normal variable Zi, respectively. From the above equation, the normal equivalent standard deviation σi*N can be computed as

σi*N = φ(zi*)/fi(xi*)  (7.71)

Therefore, according to Eqs. (7.70) and (7.71), the mean and standard deviation of the normal equivalent of the random
variable Xi at any expansion point x* can be calculated.
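As a concrete illustration of Eqs. (7.70) and (7.71), the short sketch below computes the equivalent normal mean and standard deviation of a lognormal variable at a trial expansion point. The lognormal distribution and its parameter values are illustrative assumptions, not taken from the text.

```python
from statistics import NormalDist
import math

std_norm = NormalDist()  # standard normal, for phi(.), PHI(.), and PHI^-1(.)

def rackwitz_normal_equivalent(cdf, pdf, x_star):
    """Equivalent normal mean and std at x_star, per Eqs. (7.70)-(7.71)."""
    z_star = std_norm.inv_cdf(cdf(x_star))        # z* = PHI^-1[F(x*)]
    sigma_n = std_norm.pdf(z_star) / pdf(x_star)  # Eq. (7.71)
    mu_n = x_star - z_star * sigma_n              # Eq. (7.70)
    return mu_n, sigma_n

# Illustrative lognormal variable: ln X ~ N(mu_ln, sigma_ln^2)
mu_ln, sigma_ln = 3.0, 0.25
ln_cdf = lambda x: std_norm.cdf((math.log(x) - mu_ln) / sigma_ln)
ln_pdf = lambda x: std_norm.pdf((math.log(x) - mu_ln) / sigma_ln) / (x * sigma_ln)

x_star = 25.0
mu_n, sigma_n = rackwitz_normal_equivalent(ln_cdf, ln_pdf, x_star)
```

For the lognormal case, the numerical result can be checked against the known closed-form equivalents σi*N = xi* σ_lnX and μi*N = xi*(1 − ln xi* + μ_lnX).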

It should be noted that the above normal transformation utilizes only the marginal distributions of the stochastic variables
without considering their correlations. Therefore, it is, in theory, suitable for problems involving independent non–normal

random variables. When random variables are non-normal and correlated, additional considerations must be given in the normal transformation; these are described in the next subsection.

To incorporate the normal transformation for non-normal, uncorrelated random variables, the iterative algorithms described previously for the AFOSM reliability method can be modified as follows.

Step 1: Select an initial trial solution x(r).

Step 2: Compute W(x(r)) and the corresponding sensitivity coefficient vector sx,(r).

Step 3: Revise the solution point x(r+1) according to Eq. (7.60), with the means and standard deviations of the non-normal random variables replaced by their normal equivalents, that is,

(7.72)

in which the subscript N stands for statistical properties in the equivalent normal space.

Step 4: Check whether x(r) and x(r+1) are sufficiently close. If yes, compute the reliability index βAFOSM according to Eq. (7.55) and the corresponding reliability ps = Φ(βAFOSM), then go to Step 5; otherwise, update the solution point by letting x(r) = x(r+1) and return to Step 2.

Step 5: Compute the sensitivity of the reliability index and reliability with respect to changes in the random variables according to Eqs. (7.57) and (7.58), with D replaced by its normal-equivalent counterpart at the design point x*.
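The steps above can be sketched in code. The example below runs the iterative search in the equivalent normal space for a deliberately simple performance function W = C − Q, with a normal capacity C and a lognormal load Q; all distribution parameters are assumed for illustration, and the Rackwitz equivalents of the lognormal variable are written in their closed form.

```python
from statistics import NormalDist
import math

std = NormalDist()

# Assumed illustrative variables: capacity C ~ N(41, 3^2), load Q ~ lognormal
mu_C, sig_C = 41.0, 3.0
mean_Q, cv_Q = 35.0, 0.2
sig_ln = math.sqrt(math.log(1.0 + cv_Q ** 2))   # lognormal parameters of Q
mu_ln = math.log(mean_Q) - 0.5 * sig_ln ** 2

def W(c, q):                      # performance function W = C - Q
    return c - q

c, q = mu_C, mean_Q               # Step 1: start at the means
for _ in range(100):
    # normal equivalents of the lognormal Q at the current point (closed form)
    sigN_Q = q * sig_ln                       # Eq. (7.71) for a lognormal
    muN_Q = q * (1.0 - math.log(q) + mu_ln)   # Eq. (7.70) for a lognormal
    # Step 2: standardized point and gradient of W in equivalent normal space
    z = [(c - mu_C) / sig_C, (q - muN_Q) / sigN_Q]
    grad = [sig_C * 1.0, sigN_Q * (-1.0)]     # dW/dz_i = sigma_i * dW/dx_i
    g = W(c, q)
    # Step 3: projection-type update of the solution point in z-space
    norm2 = sum(gi * gi for gi in grad)
    factor = (sum(gi * zi for gi, zi in zip(grad, z)) - g) / norm2
    z_new = [factor * gi for gi in grad]
    c_new = mu_C + sig_C * z_new[0]           # back to the original space
    q_new = muN_Q + sigN_Q * z_new[1]
    done = abs(c_new - c) < 1e-8 and abs(q_new - q) < 1e-8
    c, q = c_new, q_new
    if done:                                  # Step 4: convergence check
        break

beta = math.sqrt(sum(zi * zi for zi in z_new))  # reliability index
ps = std.cdf(beta)                              # reliability
```

This is a sketch under the stated assumptions, not the book's algorithm verbatim; Eq. (7.60) of the text is not reproduced here, so the update is written directly as the standard projection step in the standardized space.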

Table 7.7 Normal Equivalents for Some Commonly Used Non-Normal Distributions.

7.3.4.5. AFOSM reliability analysis for non-normal, correlated random variables.


For most practical engineering problems, parameters involved in load and resistance functions are correlated, non–normal
random variables. Such distributional information has important implications on the results of reliability computation, especially
on the tail of the probability distribution of the system performance function. The procedures of the Rackwitz normal
transformation and orthogonal decomposition described previously can be integrated in the AFOSM reliability analysis.

For correlated non-normal variables, Der Kiureghian and Liu (1985) and Liu and Der Kiureghian (1986) developed a normal transformation that preserves the marginal probability contents and the correlation structure of multivariate non-normal random variables. More specifically, their approach transforms each non-normal random variable into the corresponding standard normal variable as

Zi = Φ^−1[Fi(Xi)]  (7.73)

Furthermore, the correlation between a pair of non–normal random variables is preserved in the standard normal space by
Nataf's bivariate distribution model as

ρij = ∫∫ [(xi − μi)/σi] [(xj − μj)/σj] φ2(zi, zj | ρ*ij) dzi dzj  (7.74)

with xi = Fi^−1[Φ(zi)] and xj = Fj^−1[Φ(zj)],

in which ρij and ρ*ij are, respectively, the correlation coefficients of the random variables Xi and Xj in the original and the normal-transformed spaces; and φ2(zi, zj | ρ*ij) is the bivariate standard normal PDF with correlation coefficient ρ*ij.

For a pair of non-normal random variables, Xi and Xj, with known marginal PDFs and correlation coefficient ρij, Eq. (7.74) can be applied to solve for ρ*ij. To avoid the computation required for solving ρ*ij in Eq. (7.74), Der Kiureghian and Liu (1985) developed a set of semi-empirical formulas as

ρ*ij = Tij ρij  (7.75)

in which Tij is a transformation factor depending on the marginal distributions and the correlation of the two random variables under consideration. In the case that both random variables are normal, the transformation factor Tij has a value of 1. Given the marginal distributions and correlation for a pair of random variables, the formulas of Der Kiureghian and Liu (1985) compute the corresponding transformation factor Tij to obtain the equivalent correlation as if the two random variables were bivariate normal. After all pairs of random variables are treated, the correlation matrix in the correlated normal space, R(Z), is obtained.

Ten different marginal distributions commonly used in reliability computations were considered by Der Kiureghian and Liu
(1985) and are tabulated in Table 7.8. For each combination of two distributions, there is a corresponding formula. Therefore, a
total of 54 formulas for 10 different distributions were developed which are divided into five categories as shown in Fig. 7.5.
The complete forms of these formulas are given in Table 7.9. Due to the semi-empirical nature of the equations in Table 7.9, there is a slight possibility that the resulting ρ*ij may violate its valid range when the original ρij is close to −1 or 1. An algorithm for the AFOSM reliability analysis based on the transformation of Der Kiureghian and Liu for problems involving multivariate non-normal random variables can be found in Tung (1996).

The normal transformation of Der Kiureghian and Liu (1985) preserves only the marginal distributions and the second-order correlation structure of the correlated random variables, which are partial statistical features of the complete information representable by the joint distribution function. Despite its approximate nature, the normal transformation of Der Kiureghian and Liu represents, for most practical engineering problems, the best approach to treating the available statistical information about correlated random variables. This is because, in reality, the choices of multivariate distribution functions for correlated random variables are few compared with the univariate distribution functions. Furthermore, the derivation of a reasonable joint probability distribution for a mixture of correlated non-normal random variables is difficult, if not impossible.
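For the lognormal-lognormal pair, the Nataf model of Eq. (7.74) has a known closed-form solution, which makes it convenient for illustrating the idea behind the Tij factors of Eq. (7.75). The sketch below, with assumed coefficients of variation and target correlation, computes the equivalent normal-space correlation ρ* and then verifies by simulation that transforming correlated standard normals back through the marginals reproduces the target correlation in the original space.

```python
import math
import random

# Assumed lognormal pair: coefficients of variation and target correlation
cv_i, cv_j, rho_target = 0.3, 0.3, 0.7

# Closed-form Nataf result for two lognormals:
#   rho* = ln(1 + rho * cv_i * cv_j) / sqrt(ln(1 + cv_i^2) * ln(1 + cv_j^2))
sig_i = math.sqrt(math.log(1.0 + cv_i ** 2))
sig_j = math.sqrt(math.log(1.0 + cv_j ** 2))
rho_star = math.log(1.0 + rho_target * cv_i * cv_j) / (sig_i * sig_j)
T = rho_star / rho_target      # transformation factor, cf. Eq. (7.75)

# Simulation check: correlated standard normals -> lognormal marginals
random.seed(1)
n = 100_000
a = math.sqrt(1.0 - rho_star ** 2)
xs, ys = [], []
for _ in range(n):
    z1 = random.gauss(0.0, 1.0)
    z2 = rho_star * z1 + a * random.gauss(0.0, 1.0)  # corr(z1, z2) = rho*
    xs.append(math.exp(sig_i * z1))                  # unit-median lognormals
    ys.append(math.exp(sig_j * z2))

def corr(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    su = math.sqrt(sum((x - mu) ** 2 for x in u))
    sv = math.sqrt(sum((y - mv) ** 2 for y in v))
    return sum((x - mu) * (y - mv) for x, y in zip(u, v)) / (su * sv)

rho_sim = corr(xs, ys)   # should be close to rho_target
```

Note that ρ* exceeds ρ here (T > 1), consistent with the transformation factors being greater than 1 for positively correlated lognormal pairs.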

7.3.5. Monte Carlo Simulation Methods


Monte Carlo simulation is a general-purpose method for estimating the statistical properties of a random variable that is related to a number of random variables, which may or may not be correlated. In Monte Carlo simulation, values of the stochastic parameters are generated according to their distributional properties. The generated parameter values are then used to compute the value of the performance function. After a large number of simulated realizations of the performance function are generated, the reliability of the structure can be estimated as the ratio of the number of realizations with W ≥ 0 to the total number of simulated realizations.
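As an illustration, the sketch below applies Monte Carlo simulation to a sewer-capacity problem of the kind in Example 7.8, taking the sewer capacity as the standard full-pipe Manning formula Q = (0.463/n) D^(8/3) S^(1/2) in English units and an inflow of 35 ft³/s. The standard deviations and the way the correlated pair (n, D) is generated are illustrative assumptions, so the resulting reliability will not match the example's value.

```python
import math
import random

random.seed(7)

# Means as in Example 7.8; standard deviations assumed for illustration
mu_n, mu_D, mu_S = 0.015, 3.0, 0.005
sd_n, sd_D, sd_S = 0.00075, 0.06, 0.00025
rho_nD = -0.75             # corr(n, D); S independent of both
q_load = 35.0              # inflow discharge, ft^3/s

def capacity(n, D, S):     # full-pipe Manning capacity (English units)
    return (0.463 / n) * D ** (8.0 / 3.0) * math.sqrt(S)

n_sim, failures = 100_000, 0
b = math.sqrt(1.0 - rho_nD ** 2)
for _ in range(n_sim):
    z1, z2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    n = mu_n + sd_n * z1
    D = mu_D + sd_D * (rho_nD * z1 + b * z2)   # correlated with n
    S = random.gauss(mu_S, sd_S)
    if capacity(n, D, S) - q_load < 0.0:       # W < 0 means failure
        failures += 1

ps = 1.0 - failures / n_sim    # reliability estimate
```

The estimator's standard error shrinks as 1/sqrt(n_sim), which is why small failure probabilities demand very large sample sizes, as discussed below.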

Figure 7.5 Categories of the Normal Transformation Factor, Tij (From Der Kiureghian and Liu, 1985)

Table 7.8 Definitions of Distributions Used in Fig. 7.5 and Table 7.9

Table 7.9 Semi–empirical Normal Transformation Formulas

The major disadvantage of Monte Carlo simulation is its computational intensiveness. The number of sample realizations required to estimate the risk accurately depends on the magnitude of the unknown risk itself. In general, as the failure probability gets smaller, the required number of simulated realizations increases. Therefore, several variations of Monte Carlo simulation have been developed to estimate the failure probability accurately while reducing excessive computation time. They include stratified sampling and Latin hypercube sampling (McKay et al., 1979), importance sampling (Harbitz, 1983; Schueller and Stix, 1986), and the reduced space approach (Karamchandani, 1987).

7.4. RISK-BASED DESIGN OF HYDRAULIC STRUCTURES


Reliability analysis methods can be applied to design hydraulic structures with or without considering risk costs. Risk costs are
the cost items incurred due to the unexpected failure of the structures and they can be broadly classified into tangible and
intangible costs. Tangible costs are those measurable in terms of monetary units which include damage to properties and
structures, loss in business, cost of repair, and so forth. On the other hand, intangible costs are those not measurable in monetary units, such as psychological trauma, loss of lives, social unrest, and others.

Risk-based design of hydraulic structures integrates the procedures of uncertainty and reliability analyses in the design practice.
The risk-based design procedure considers trade-offs among various factors, such as failure probability, economics, and other
performance measures in hydraulic structure design. Plate and Duckstein (1987, 1988) list a number of performance measures,
called the figures of merit in the risk-based design of hydraulic structures and water resource systems, which are further
discussed by Plate (1992). When the risk-based design is embedded into an optimization framework, the combined procedure
is called the optimal risk-based design.

7.4.1. Basic Concept


The basic concept of risk-based design is shown schematically in Fig. 7.6. The risk function accounting for the uncertainties of
various factors can be obtained using the reliability computation procedures described in previous sections. Alternatively, the
risk function can account for the potential undesirable consequences associated with the failure of hydraulic structures. For the
sake of simplicity, only the tangible damage cost is considered here.

Because the risk costs associated with the failure of a hydraulic structure cannot be precisely predicted from year to year, a practical way is to quantify these costs by their expected values on an annual basis. The total annual expected cost (TAEC) is the sum of the annual installation cost, the annual operation and maintenance cost, and the annual expected damage cost, which can be expressed as

TAEC(Θ) = FC(Θ) × CRF + OM + E(D|Θ)  (7.76)

where FC is the first or total installation cost, which is a function of the decision vector Θ that may include the size and configuration of the hydraulic structure; OM is the annual operation and maintenance cost; E(D|Θ) is the annual expected damage cost associated with structural failure; and CRF is the capital recovery factor, which brings the present worth of the installation costs to an annual basis. The CRF can be computed as (see Section 1.6)

CRF = i(1 + i)^n / [(1 + i)^n − 1]  (7.77)

Figure 7.6 Schematic sketch of risk-based design

with n and i being the expected service life of the structure and the interest rate, respectively.
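A one-line implementation of Eq. (7.77), with an illustrative service life and interest rate:

```python
def crf(i, n):
    """Capital recovery factor, Eq. (7.77): i(1+i)^n / [(1+i)^n - 1]."""
    f = (1.0 + i) ** n
    return i * f / (f - 1.0)

# e.g., a structure with a 50-year service life at 5 percent interest
annual_factor = crf(0.05, 50)   # about 0.0548 per dollar of first cost
```

As n grows large, the factor approaches the interest rate i itself, since the repayment of principal becomes negligible on an annual basis.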

Frequently in practice, the optimal risk-based design determines the optimal structural size, configuration, and operation such that the total annual expected cost is minimized. Referring to Fig. 7.6, as the structural size increases, the annual installation cost increases, whereas the annual expected damage cost associated with failure decreases. The optimal risk-based design procedure attempts to determine the minimum point on the total annual expected cost curve. Mathematically, the optimal risk-based design problem can be stated as

minimize TAEC(Θ)  (7.78a)

subject to gj(Θ) = 0, j = 1, 2, …, m  (7.78b)

where the gj(Θ), j = 1, 2, …, m, are constraints representing the design specifications that must be satisfied.

In general, the solution to Eqs. (7.78a–b) could be acquired through the use of appropriate optimization algorithms. The
selection or development of the solution algorithm is largely problem specific, depending on the characteristics of the problem
to be optimized.
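For a single design variable, the trade-off of Fig. 7.6 can be explored by simple enumeration over candidate capacities; the first-cost and expected-damage functions below are made-up illustrations, not taken from the text.

```python
import math

def crf(i, n):                      # capital recovery factor, Eq. (7.77)
    f = (1.0 + i) ** n
    return i * f / (f - 1.0)

# Illustrative cost functions of design capacity qc (ft^3/s):
def first_cost(qc):                 # installation cost rises with size
    return 50_000.0 + 1_200.0 * qc

def annual_expected_damage(qc):     # damage cost falls with size
    return 80_000.0 * math.exp(-qc / 30.0)

annual_factor = crf(0.05, 50)
candidates = range(20, 201, 5)
taec = {qc: first_cost(qc) * annual_factor + annual_expected_damage(qc)
        for qc in candidates}
qc_opt = min(taec, key=taec.get)    # capacity minimizing TAEC
```

With these assumed functions the minimum lies in the interior of the search range: the rising annualized installation cost and the falling expected damage cost cross over, exactly as sketched in Fig. 7.6.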

7.4.2. Historical Development of Hydraulic Design Methods


The evolution of hydraulic design methods can be roughly classified into three stages: (1) return period design, (2) conventional
risk-based design, and (3) optimal risk-based design with consideration given to various uncertainties.

7.4.2.1. Return-period design.


Using the return period design approach, a water resource engineer first determines the design discharge from a frequency-discharge relation by selecting an appropriate design frequency or return period. The design discharge is then used to determine the structure size and layout that have a satisfactory hydraulic performance. In the return period design method, the selection of the design return period is crucial to the hydraulic design. Once the design return period is determined, it remains fixed throughout the design process. In the past, the design return period was subjectively selected on the basis of an individual's experience, the perceived importance of the structure, and/or legal requirements. The selection of the design return period is a complex procedure involving consideration of economic, social, legal, and other factors; however, the procedure does not account for these factors explicitly.

7.4.2.2. Conventional risk-based design.


The conventional risk-based design considers the inherent hydrologic uncertainty in the calculation of the expected economic
losses. In the risk-based design procedure, the design return period is a decision variable instead of being a pre–selected
design parameter value as in the return period design procedure.

The concept of risk-based design has been recognized for many years. As early as 1936, the U.S. Congress passed the Flood Control Act (U.S. Statutes 1570), which advocated consideration of failure consequences in the design procedure. The economic risks or the expected flood losses were not explicitly considered until the early 1960s. Pritchett's (1964) work was
one of the early attempts to apply the risk-based hydraulic design concept to highway culverts. At four actual locations,
Pritchett calculated the investment costs and the expected flood damage costs on an annual basis for several design
alternatives among which the most economical was selected. The results indicated that a more economical solution could be
reached by selecting smaller culvert sizes compared with the traditional return period method used by the California Division of
Highways. The conventional approach has been applied to the design of various hydraulic structures.

7.4.2.3. Risk-based design considering other uncertainties.


In the conventional risk–based hydraulic design procedure, economic risks are calculated considering only the randomness of
hydrologic events. In reality, various types of uncertainty exist in hydraulic structure design. Advances have been made to incorporate other aspects of uncertainty in the design of various hydraulic structures.

7.4.3. Tangible Costs in Risk-Based Design of Hydraulic Structures


Design of a hydraulic structure, by nature, is an optimization problem consisting of an analysis of the hydraulic performance of
the structure to convey flow across or through the structure and a determination of the most economical design alternative. The
objective function is to minimize the sum of capital investment cost, the expected flood damage costs, and operation and
maintenance costs. For example, the relevant variables and parameters associated with the investment cost and the expected
damage costs of highway drainage structures are listed in Tables 7.10 and 7.11, respectively. The maintenance cost over the
service life of the structure is generally treated as a yearly constant. Based on Tables 7.10 and 7.11, the information needed for
the risk-based design of a highway drainage structure can be categorized into four types:

1. Hydrologic/physiographical data, including flood and precipitation data, drainage area, channel bottom slope, and drainage
basin slope. These data are needed to predict the magnitude of hydrologic events such as streamflow and rainfall by frequency
analysis and/or regional analysis.

2. Hydraulic data, including floodplain slopes, geometry of the channel cross-section, roughness coefficients, size of structural
opening, and height of embankment. These data are needed to determine the flow carrying capacities of hydraulic structures
and to perform hydraulic analysis.

3. Structural data, including material of substructures and layout of structure.

4. Economic data, including (1) type, location, distribution, and economic value of upstream properties such as crops and
buildings; (2) unit costs of structural materials, equipment, operation of vehicle, accident, occupancy, and labor fee; (3) depth
and duration of overtopping, rate of repair, and rate of accidents; and (4) time of repair and length of detour.

In the design of hydraulic structures, the installation cost often depends on environmental conditions such as the location of the structure, geomorphic and geologic conditions, the soil type at the structure site, the type and price of construction material, hydraulic and flow conditions, the recovery factor of the capital investment, and labor and transportation costs. In
reality, these factors would result in uncertainties in cost functions used in the analysis. The incorporation of the economic
uncertainties in the risk-based design of hydraulic structures can be found elsewhere (U.S. Army Corps of Engineers, 1996).

7.4.4. Evaluations of Annual Expected Flood Damage Cost


In reliability-based and optimal risk-based designs of hydraulic structures, the thrust is to evaluate the annual expected damage cost, E(D|Θ), as a function of the PDFs of load and resistance, the damage function, and the types of uncertainty considered.

7.4.4.1. Conventional approach.


In the conventional risk-based design, where only the inherent hydrologic uncertainty is considered, the structural size and its corresponding flow-carrying capacity, qc, in general have a one-to-one, monotonically increasing relation. Consequently, the design variable can alternatively be expressed in terms of the design discharge of the hydraulic structure. The annual expected damage cost in the conventional risk-based hydraulic design can be computed as

E(D|qc) = ∫_{qc}^{∞} D(q|qc) f(q) dq  (7.79)

Table 7.10 Variables and Parameters Relevant in Evaluating Capital Investment Cost of Highway Drainage Structures

             Pipe Culverts         Box Culverts           Bridges
Parameters   Unit cost of culvert  Unit cost of concrete  Unit cost of bridge
                                   Unit cost of steel
Variables    Number of pipes       Number of barrels      Bridge length
             Pipe size             Length of barrel       Bridge width
             Pipe length           Width of barrel
             Pipe materials        Quantity of concrete
                                   Quantity of steel

Source: From Tung and Bao (1990).

Table 7.11 Damage Categories and Related Economic Variables and Site Characteristics in Risk-Based Design of Highway Drainage Structures

Floodplain property damage (losses to crops; losses to buildings)
  Economic variables: type of crops; economic value of crops; economic values of buildings
  Site characteristics: location of crop fields; location of buildings; physical layout of drainage structures; roadway geometry; flood characteristics; stream cross-section; slope of channel; channel and floodplain roughness properties

Damage to pavement and embankment (pavement damage; embankment damage)
  Economic variables: material cost of pavement; material cost of embankment; equipment costs; labor costs; repair rate for pavement and embankment
  Site characteristics: flood magnitude; flood hydrograph; overtopping duration; depth of overtopping; total area of pavement; total volume of embankment; types of drainage structures and layout; roadway geometry

Traffic-related losses (increased travel cost due to detour; lost time of vehicle occupants; increased risk of accidents on detour; increased risk of accidents on a flooded highway)
  Economic variables: rate of repair; operational cost of vehicle; distribution of income for vehicle occupants; cost of vehicle accident; rate of accident; duration of repair
  Site characteristics: average daily traffic volume; composition of vehicle types; length of normal detour path; flood hydrograph; duration and depth of overtopping

Source: From Tung and Bao (1990).

where qc is the deterministic flow capacity of a hydraulic structure subject to random flood loadings that follow a PDF, f(q), and D(q|qc) is the damage function corresponding to a flood magnitude q and hydraulic structure capacity qc. Due to the complexity of the damage function and the form of the PDF of floods, the analytical integration of Eq. (7.79) is, in most practical applications, difficult, if not impossible. Hence, it is practical to replace Eq. (7.79) by a numerical approximation.
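A numerical approximation of Eq. (7.79) can be as simple as a trapezoidal sum of D(q|qc) f(q) over discretized flood magnitudes above the capacity. In the sketch below the lognormal flood distribution, the linear damage function, and all parameter values are illustrative assumptions.

```python
import math

def lognormal_pdf(q, mu_ln, sig_ln):
    """PDF of a lognormal flood magnitude (assumed flood distribution)."""
    z = (math.log(q) - mu_ln) / sig_ln
    return math.exp(-0.5 * z * z) / (q * sig_ln * math.sqrt(2.0 * math.pi))

def expected_damage(qc, mu_ln=math.log(40.0), sig_ln=0.5,
                    unit_damage=1_000.0, q_max=2_000.0, steps=20_000):
    """Trapezoidal approximation of E(D|qc) = int_qc^inf D(q|qc) f(q) dq,
    with an assumed linear damage function D(q|qc) = unit_damage * (q - qc)."""
    dq = (q_max - qc) / steps
    total = 0.0
    for k in range(steps + 1):
        q = qc + k * dq
        w = 0.5 if k in (0, steps) else 1.0       # trapezoidal end weights
        total += w * unit_damage * (q - qc) * lognormal_pdf(q, mu_ln, sig_ln)
    return total * dq

# Expected damage decreases as the design capacity increases (cf. Fig. 7.6)
d40, d80 = expected_damage(40.0), expected_damage(80.0)
```

The upper limit q_max truncates the integral where the flood PDF has become negligible; in practice it should be checked against the tail of the fitted distribution.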

Equation (7.79) only considers the inherent hydrologic uncertainty due to the random occurrence of flood events, represented
by the PDF, f(q). It does not consider hydraulic and economic uncertainties. Furthermore, a perfect knowledge about the
probability distribution of flood flow is assumed. This generally is not the case in reality.

7.4.4.2. Incorporation of hydraulic uncertainty.


As described in Sec. 7.1.1, uncertainties also exist in the hydraulic computations for determining the flow-carrying capacity of a hydraulic structure. In other words, qc is a quantity subject to uncertainty. From the uncertainty analysis of qc, the statistical properties of qc can be estimated. Hence, to incorporate the uncertainty of qc in risk-based design, the annual expected damage can be calculated as

E(D|Θ) = ∫_0^∞ [ ∫_{qc}^{∞} D(q|qc) f(q) dq ] g(qc|Θ) dqc  (7.80)

in which g(qc|Θ) is the PDF of random flow carrying capacity qc. Again, in practical problems, the annual expected damage in Eq.
(7.80) would have to be evaluated through the use of appropriate numerical integration schemes.

7.4.4.3. Extension of conventional approach by considering hydrologic parameter uncertainty.


Since the occurrence of streamflow is random by nature, statistical properties such as the mean, standard deviation, and skewness of the distribution calculated from a finite sample are also subject to sampling errors. In hydrologic frequency analysis, a commonly used frequency equation for determining the magnitude of a hydrologic event of a specified return period of T years is

qT = μ + KT σ  (7.81)

in which qT is the magnitude of the hydrologic event of return period T years; μ and σ are, respectively, the population mean and standard deviation of the hydrologic event under consideration; and KT is the frequency factor, which depends on the skew coefficient and the probability distribution of the hydrologic event of interest.
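For a normally distributed event, the frequency factor in Eq. (7.81) reduces to the standard normal quantile, KT = Φ^−1(1 − 1/T), so the T-year magnitude can be computed directly as below; the mean and standard deviation used are illustrative.

```python
from statistics import NormalDist

def q_T(mu, sigma, T):
    """T-year event magnitude by Eq. (7.81), q_T = mu + K_T * sigma,
    with the normal-distribution frequency factor K_T = PHI^-1(1 - 1/T)."""
    K_T = NormalDist().inv_cdf(1.0 - 1.0 / T)
    return mu + K_T * sigma

# e.g., annual floods with an assumed mean of 40 and std of 12 (ft^3/s)
q100 = q_T(40.0, 12.0, 100)   # K_100 is about 2.326
```

For skewed distributions such as log-Pearson type III, KT also depends on the skew coefficient and is read from frequency-factor tables rather than from the normal quantile.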

Consider flooding as the hydrologic event that could potentially cause the failure of the hydraulic structure. Due to the uncertainty associated with μ, σ, and KT in Eq. (7.81), the flood magnitude of a specified return period, qT, is also a random variable with its own probability distribution (Fig. 7.7), instead of a single-valued quantity represented by its "average," as is commonly done in practice. Sampling distributions for some of the probability distributions frequently used in hydrologic flood frequency analysis have been presented elsewhere (Chowdhury and Stedinger, 1991; Stedinger, 1983). Hence, there is an expected damage corresponding to a flood magnitude of the T-year return period, which can be expressed as

E(DT|qc) = ∫_0^∞ D(qT|qc) h(qT) dqT  (7.82)

where E(DT|qc) is the expected damage corresponding to a T-year flood given a known flow capacity of the hydraulic structure, qc; h(qT) is the sampling PDF of the flood magnitude estimator of the T-year return period; and qT is the dummy variable for the T-year flood. To combine the inherent hydrologic uncertainty, represented by the PDF of the annual flood, f(q), and the hydrologic parameter uncertainty, represented by the sampling PDF of the flood estimator of a given return period, h(qT), the annual expected damage cost can be written as

(7.82)

Figure 7.7 Sampling distribution associated with flood estimator (Tung, 1996)

7.4.4.4. Incorporation of hydrologic inherent/parameter and hydraulic uncertainties.


To include the hydrologic inherent and parameter uncertainties along with the hydraulic uncertainty associated with the flow-carrying capacity, the annual expected damage cost can be written as

(7.83)

Based on the above formulations for computing the annual expected damage in the risk-based design of hydraulic structures, one realizes that the mathematical complexity increases as more uncertainties are considered. However, an accurate estimation of the annual expected damage associated with structural failure requires the consideration of all uncertainties, if that can practically be done. Otherwise, the annual expected damage would, in most cases, be underestimated, leading to an inaccurate optimal design. Tung (1987) shows numerically that, without a full account of the uncertainties in the analysis, the resulting annual expected damage is significantly underestimated, even with a 75-year-long flood record.

7.4.5. U.S. Army Corps of Engineers Risk-Based Analysis for Flood-Damage Reduction Structures

This section briefly summarizes the main features of the U.S. Army Corps of Engineers (USACE, 1996) risk-based analysis procedure applied to flood-damage reduction plans, such as levee construction, channel modification, flood detention, or mixed-measure plans. The procedure explicitly considers the uncertainties in the discharge-frequency relation, the stage-discharge function, and the stage-damage relation. The performance measures of each flood-damage reduction plan include an economic indicator, such as the annual expected inundation damage reduction, and non-economic measures, such as the expected annual exceedance probability, the long-term risk, the conditional annual nonexceedance probability, and the consequence of capacity exceedance. The long-term failure probability is computed by

pf(n) = 1 − [1 − pf(1)]^n  (7.84)

where pf(1) = the annual failure probability, and pf(n) = the long-term failure probability over a period of n years.
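Equation (7.84) in code, with an illustrative annual failure probability:

```python
def long_term_risk(pf_annual, n_years):
    """Eq. (7.84): probability of at least one failure in n_years,
    assuming independent annual events."""
    return 1.0 - (1.0 - pf_annual) ** n_years

# e.g., a levee with an assumed 1 percent annual failure probability
risk_100 = long_term_risk(0.01, 100)   # about 0.634 over 100 years
```

The result illustrates why a "100-year" level of protection still carries a substantial chance of being exceeded at least once during a comparable planning horizon.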

Uncertainty in the discharge-frequency relation, as described in Sec. 7.4.4.3, arises mainly from the sampling error due to the use of a limited amount of flood data in establishing the relation. Statistical procedures for quantifying the uncertainty associated with a discharge-frequency relation can be found elsewhere (Interagency Advisory Committee on Water Data, 1982; Stedinger et al., 1993). For the stage-discharge function, uncertainty may be contributed by factors such as measurement errors from the instrumentation or method of flow measurement, bed forms, water temperature, debris or other obstructions, unsteady flow effects, variation in hydraulic roughness with season, sediment transport, channel scour or deposition, changes in channel shape during or as a result of flood events, and other factors. Uncertainty associated with the stage-discharge function for gauged and ungauged reaches has been examined by Freeman et al. (1996).

The stage-damage relation describes the direct economic loss from floodwater inundation for a particular river reach. It is an important element in the risk-based design and analysis of hydraulic structures. The establishment of a stage-damage relation requires an extensive survey and assessment of the economic values of the structures and their contents affected by floodwater at different water stages. Components and sources of uncertainty in establishing a stage-damage relation are listed in Table 7.12. For example, the variation of content-to-structure value ratios for different types of structures in the United States is shown in Table 7.13.

In evaluating the performance of different flood damage reduction plans, or of alternatives within a plan, hydraulic simulations,
such as backwater computation or unsteady flow routing, are required to assess the system response before the various
performance measures can be quantified. Because of this computational complexity and the presence of a large number of
uncertainties, the evaluation of the various economic and noneconomic performance measures in the risk-based analysis
procedure cannot be done analytically. Therefore, the computational procedure adopted in the USACE risk-based analysis for
flood damage reduction structures is Monte Carlo simulation. In Monte Carlo simulation, a large number of plausible
discharge–frequency functions, stage–discharge relations, and stage–damage relations are generated according to the underlying
or assumed probability distributions of each of the uncertain factors involved. Under each generated scenario, the necessary
hydraulic computations are performed, based on which the various performance measures of the different flood damage reduction
plans are calculated. The process is repeated for a large number of possible scenarios, and the performance
measures are then averaged to compare the relative merits of the different plans.
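The Monte Carlo procedure just described can be sketched as below. All distributions, parameter values, and the simple rating and damage relations here are hypothetical placeholders chosen for illustration; they are not the models or data used by the USACE.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_realizations = 500   # sampled discharge/stage/damage relation scenarios
n_years = 200          # annual flood peaks simulated per scenario

ead = np.empty(n_realizations)
for i in range(n_realizations):
    # (1) Discharge-frequency function: annual peak flow assumed
    #     log-normal, with its parameters themselves uncertain.
    mu = rng.normal(6.0, 0.05)           # uncertain mean of ln Q
    sigma = abs(rng.normal(0.5, 0.03))   # uncertain std. dev. of ln Q
    q = rng.lognormal(mu, sigma, n_years)  # annual peak discharges

    # (2) Stage-discharge relation: power-law rating curve with a
    #     multiplicative error reflecting rating-curve uncertainty.
    stage = 2.0 * q**0.3 * rng.lognormal(0.0, 0.05, n_years)  # m

    # (3) Stage-damage relation: no damage below a threshold stage,
    #     then damage grows linearly with depth ($1000s per metre).
    damage = np.maximum(stage - 10.0, 0.0) * 100.0

    ead[i] = damage.mean()  # expected annual damage, this scenario

# Performance measure: average over all sampled scenarios
expected_annual_damage = ead.mean()
```

In the actual USACE procedure, step (2) is replaced by full hydraulic simulation (backwater computation or unsteady flow routing), and noneconomic measures such as annual exceedance probability are accumulated over the same set of scenarios.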

© McGraw-Hill Education. All rights reserved. Any use is subject to the Terms of Use, Privacy Notice and copyright information.
Table 7.12 Components and Sources of Uncertainty in Stage–Damage Function

Parameter/Model                     Source of Uncertainty

Number of structures in each        Errors in identifying structures; errors in classifying structures
category

First-floor elevation of            Survey errors; inaccuracies in topographic maps; errors in interpolation of contour lines
structure

Depreciated replacement value       Errors in real estate appraisal; errors in estimation of replacement cost and effective age;
of structure                        errors in estimation of depreciation; errors in estimation of market value

Structure depth–damage function     Errors in post-flood damage survey; failure to account for other critical factors: floodwater
                                    velocity, duration of flood, sediment load, building material, internal construction,
                                    condition, flood warning

Depreciated replacement value       Errors in content-inventory survey; errors in estimates of the ratio of content to
of contents                         structure value

Content depth–damage function       Errors in post-flood damage survey; failure to account for other critical factors: floodwater
                                    velocity, duration of flood, sediment load, content location, flood warning

Source: From USACE (1996).

Table 7.13 Content-to-Structure Value Ratios*,†

Structure Category          No. of Cases   Mean    Standard Deviation   Minimum   Maximum

One story – no basement     71,629         0.434   0.250                0.100     2.497
One story – basement        8,094          0.435   0.217                0.100     2.457
Two story – no basement     16,056         0.402   0.259                0.100     2.492
Two story – basement        21,753         0.441   0.248                0.100     2.500
Split level – no basement   1,005          0.421   0.286                0.105     2.493
Split level – basement      1,807          0.435   0.230                0.102     2.463
Mobile home                 2,283          0.636   0.378                0.102     2.474
All categories              122,597        0.435   0.253                0.100     2.500

Source: From USACE (1996).
* Note that these are less than the ratios commonly used by casualty insurance companies, because those reflect
replacement costs rather than depreciated replacement costs.
† Research by the Institute for Water Resources suggests that errors may be described best with an asymmetric distribution, such as a
log-normal distribution. In that case, the parameters of the error distribution cannot be estimated simply from the values shown in
this table.

The risk-based analysis procedure is illustrated through an example (see Chap. 9, USACE, 1996) in which the performance
measures of several flood damage reduction plans for the metropolitan Chester Creek Basin in Pennsylvania are examined.
Results of the risk-based analysis for each plan are shown in Table 7.14a–c. Note from Table 7.14 that four
alternative levee heights are being considered and that the mixed measure consists of channel modification and detention. The
results of the risk-based analysis shown in Table 7.14a clearly indicate that the plan of building an 8.23-m levee is the most
cost-effective. The median annual exceedance probability shown in the second column of Table 7.14b is close to the result of
a conventional flood frequency analysis that considers no uncertainty other than the natural randomness of the floods.
Comparison with the third column clearly shows that the annual expected exceedance probability is higher than the
corresponding value obtained without considering uncertainty. Consequently, the long-term failure probabilities will be underestimated if
the other uncertainties in the flood frequency relationship are not accounted for. In Table 7.14c, the conditional annual nonexceedance
probabilities of each plan under the 50-, 100-, and 250-year events also indicate the superiority of the levee plans over the
other flood damage reduction plans in terms of failure probability.
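The long-term risk columns of Table 7.14b follow from the binomial relation R = 1 - (1 - p)^n, the probability that an event with annual exceedance probability p occurs at least once in n independent years. A short Python check:

```python
def long_term_risk(p_annual, n_years):
    """Probability of at least one exceedance in n_years of service,
    given annual exceedance probability p_annual (independent years)."""
    return 1.0 - (1.0 - p_annual) ** n_years

# 8.23-m levee, p = 0.0031 (Table 7.14b, with uncertainty analysis):
risks = [long_term_risk(0.0031, n) for n in (10, 25, 50)]
# Computed values are about 0.031, 0.075, and 0.144, close to the
# tabulated 0.03, 0.08, and 0.14.
```

The same relation, evaluated with the median annual exceedance probabilities of the second column, shows why ignoring the other uncertainties (second column versus third) systematically understates long-term failure probability.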

From all the economic and noneconomic indicators used in this risk-based analysis, it appears that the 8.23-m levee is the
most desirable alternative for flood damage reduction in the Chester Creek Basin, Pennsylvania. Of course, other
issues may have to be considered before a final decision is made, such as the impacts of the levee on the environment and on
aesthetics, and the possibility of giving the public a false sense of security. Despite some incompleteness in the current state of the art of risk-
based analysis, the procedure does represent an advance over the conventional procedure by explicitly facing and dealing with the
uncertainties in the design and analysis of hydraulic structures, rather than relying on an obscure factor of safety. The risk-based
procedure provides more useful information for engineers to make better and more scientifically defensible designs and
analyses.

Table 7.14 Performance Measures from Risk-Based Analysis of Flood Damage Reduction Plans for Chester Creek Basin,
Pennsylvania

(a) Present Economic Benefits of Alternatives

Plan                   Annual With-Project        Annual Inundation            Annual Cost,   Annual Net
                       Residual Damage, $1000's   Reduction Benefit, $1000's   $1000's        Benefit, $1000's

Without project        78.1                       0.0                          0.0            0.0
6.68 m levee           50.6                       27.5                         19.8           7.7
7.32 m levee           39.9                       38.2                         25.0           13.2
7.77 m levee           29.6                       48.5                         30.6           17.9
8.23 m levee           18.4                       59.7                         37.1           22.6
Channel modification   41.2                       36.9                         25.0           11.9
Detention basin        44.1                       34.0                         35.8           –1.8
Mixed measure          24.5                       53.6                         45.6           8.0

(b) Annual Exceedance Probability and Long-Term Risk

Plan                   Median Estimate of Annual   Annual Exceed. Probability   Long-Term     Long-Term     Long-Term
                       Exceed. Probability         with Uncertainty Analysis    Risk, 10 yr   Risk, 25 yr   Risk, 50 yr

6.68 m levee           0.010                       0.0122                       0.12          0.26          0.46
7.32 m levee           0.007                       0.0082                       0.08          0.19          0.34
7.77 m levee           0.004                       0.0056                       0.05          0.13          0.25
8.23 m levee           0.002                       0.0031                       0.03          0.08          0.14
Channel modification   0.027                       0.0310                       0.27          0.55          0.79
Detention basin        0.033                       0.0380                       0.32          0.62          0.86
Mixed measure          0.014                       0.0160                       0.15          0.33          0.55

(c) Conditional Non-Exceedance Probability

Plan                   Probability of Annual Event
                       0.02      0.01      0.004

6.68 m levee           0.882     0.483     0.066
7.32 m levee           0.970     0.750     0.240
7.77 m levee           0.990     0.896     0.489
8.23 m levee           0.997     0.975     0.763
Channel modification   0.248     0.019     0.000
Detention basin        0.205     0.004     0.003
Mixed measure          0.738     0.312     0.038

Source: From USACE (1996).

7.5. REFERENCES
1. Ang, A. H. S., “Structural Risk Analysis and Reliability-Based Design,” Journal of Structural Engineering Division, American
Society of Civil Engineers, 99(9):1891—1910, 1973.
2. Ang, A. H. S., and C. A. Cornell, “Reliability Bases of Structural Safety and Design,” Journal of Structural Engineering,
American Society of Civil Engineers, 100(9):1755—1769, 1974.
3. Berthouex, P. M., “Modeling Concepts Considering Process Performance, Variability, and Uncertainty,” in T. M. Keinath
and M. P. Wanielista, eds., Mathematical Modeling for Water Pollution Control Processes, Ann Arbor Science, Ann Arbor,
MI, 1975, pp. 405—439.

5. Borgman, L. E., “Risk Criteria,” Journal of Waterways and Harbors Division, American Society of Civil Engineers, 89(WW3): 1—
35, 1963.
6. Breitung, K., “Asymptotic Approximations for Multinormal Integrals,” Journal of Engineering Mechanics, American
Society of Civil Engineers, 110(3): 357—366, 1984.
7. Cheng, S. T. “Statistics on Dam Failures,” in Reliability and Uncertainty Analysis in Hydraulic Design, American Society of
Civil Engineers, (ASCE), New York, 1993, pp. 97—106.
8. Cheng, S. T., B. C. Yen, and W. H. Tang, “Sensitivity of Risk Evaluation to Coefficient of Variation,” in Stochastic and Risk
Analysis in Hydraulic Engineering, Water Resources Publications, Littleton, CO, 1986, pp. 266—273.
9. Chowdhury, J. U., and J. R. Stedinger, “Confidence Interval for Design Floods with Estimated Skew Coefficient,” Journal of
Hydraulic Engineering, American Society of Civil Engineers, 117(7):811—831, 1991.
10. Cornell, C. A. “A Probability–Based Structural Code,” Journal of American Concrete Institute, 66(12): 974—985, 1969.
11. Der Kiureghian, A., H. Z. Lin, and S. J. Hwang, “Second-Order Reliability Approximations,” Journal of Engineering
Mechanics, American Society of Civil Engineers, 113(8): 1208—1225, 1987.
12. Der Kiureghian, A., and P. L. Liu, “Structural Reliability Under Incomplete Probability Information,” Journal of Engineering
Mechanics, American Society of Civil Engineers, 112(1):85—104, 1986.
13. Ditlevsen, O., “Principle of Normal Tail Approximation,” Journal of Engineering Mechanics, American Society of Civil
Engineers, 107(6): 1191—1208, 1981.
14. Freeman, G. E., R. R. Copeland, and M. A. Cowan, “Uncertainty in Stage–Discharge Relationships,” in I. Goulter and K. Tickle,
eds., Stochastic Hydraulics ’96, A. A. Balkema, The Netherlands, 1996.
15. Harbitz, A., “Efficient and Accurate Probability of Failure Calculation by Use of the Importance Sampling Technique,”
Proceedings, International Conference on Applications of Statistics and Probability in Soil and Structural Engineering,
Università di Firenze, Florence, Italy, 1983.
16. Harr, M. E., “Probabilistic Estimates for Multivariate Analyses,” Applied Mathematical Modelling, 13: 313—318, 1989.
17. Hasofer, A. M., and N. C. Lind, “Exact and Invariant Second-Moment Code Format,” Journal of Engineering Mechanics
Division, American Society of Civil Engineers, 100(1): 111—121, 1974.
18. Interagency Advisory Committee on Water Data, “Guidelines for Determining Flood Flow Frequency,” Bulletin 17B. U.S.
Department of Interior, U.S. Geologic Survey, Office of Water Data Coordination, Reston, VA, 1982.
19. Karamchandani, A., “Structural System Reliability Analysis Methods,” Report to Amoco Production Company. Department
of Civil Engineering, Stanford University, 1987.
20. Karmeshu and F., Lara Rosano, “Modelling Data Uncertainty in Growth Forecasts,” Applied Mathematical Modelling, 11:
62—68, 1987.
21. Liu, P. L. and A., Der Kiureghian, “Multivariate Distribution Models with Prescribed Marginals and Covariances,”
Probabilistic Engineering Mechanics, 1(2):105—112, 1986.
22. Madsen, H. O., S., Krenk, and N. C. Lind, Methods of Structural Safety, Prentice—Hall, Englewood Cliffs, N.J. 1986.
23. Mays, L. W., and Y. K. Tung, Hydrosystems Engineering and Management, McGraw-Hill, New York, 1992.
24. McKay, M. D., R. J. Beckman, and W. J. Conover, “A Comparison of Three Methods for Selecting Values of Input Variables
in the Analysis of Output from a Computer Code,” Technometrics, 21(2):239—245, 1979.
25. Melchers, R. E., Structural Reliability Analysis and Prediction, Ellis Horwood, Ltd., Chichester, UK, 400 pp, 1987.
26. Wen, Y. K. “Approximate Methods for Nonlinear Time—Variant Reliability Analysis.” Journal of Engineering Mechanics,
American Society of Civil Engineers, 113(12): 1826—1839, 1987.
27. Park, C. S., "The Mellin Transform in Probabilistic Cash Flow Modeling," The Engineering Economist, 32(2):115–134,
1987.
28. Patel, J. K., C. H., Kapadia, and D. B. Owen, Handbook of Statistical Distributions, John Wiley Sons, New York, 1976.
29. Plate, E. J., “Stochastic Design in Hydraulics: Concepts for a Broader Application,” in J. T. Kuo and G. F. Lin, eds., Stochastic
Hydraulics ’92, Proceedings of the 6th IAHR International Symposium, Taipei, Water Resources Publications, Littleton, CO, 1992, pp. 1—15.
30. Plate, E. J., and L. Duckstein, “Reliability in Hydraulic Design,” in L. Duckstein and E. J. Plate, eds., Engineering Reliability
and Risk in Water Resources, Martinus Nijhoff, Dordrecht, The Netherlands, 1987, pp. 27—60.
31. Plate, E. J., and L. Duckstein, “Reliability-Based Design Concepts in Hydraulic Engineering,” Water Resources Bulletin,
American Water Resources Association, 24(2): 234—245, 1988.

32. Pritchett, H. D., “Application of the Principles of Engineering Economy to the Selection of Highway Culverts,” Stanford
University, Report EEP-13, 1964.
33. Rackwitz, R., “Practical Probabilistic Approach to Design,” Bulletin 112, Comité Européen du Béton, Paris, France, 1976.
34. Rackwitz, R., and B., Fiessler, “Structural Reliability Under Combined Random Load Sequence,” Computers and Structures,
9:489—494, 1978.
35. Rosenblueth, E., “Point Estimates for Probability Moments,” Proceedings, National Academy of Science, 72(10):3812—
3814, 1975.
36. Rosenblueth, E., “Two–Point Estimates in Probabilities,” Applied Mathematical Modelling, 5:329—335, 1981.
37. Schueller, G. I., and R. Stix, “A Critical Appraisal of Methods to Determine Failure Probabilities,” Report No. 4–86, Institut
für Mechanik, Universität Innsbruck, Austria, 1986.
38. Shinozuka, M., “Basic Analysis of Structural Safety,” Journal of Structural Engineering Division, American Society of Civil
Engineers, 109(3):721—740, 1983.
39. Springer, M. D., The Algebra of Random Variables, John Wiley & Sons, New York, 1979.
40. Stedinger, J. R., "Confidence Intervals for Design Events," Journal of Hydraulic Engineering, American Society of Civil
Engineers, 109(HY1):13—27, 1983.
41. Tung, Y. K., “Effects of Uncertainties on Optimal Risk-Based Design of Hydraulic Structures,” Journal of Water Resources
Planning and Management, American Society of Civil Engineers, 113(5):709—722, 1987.
42. Tung, Y. K., “Mellin Transform Applied to Uncertainty Analysis in Hydrology/Hydraulics,” Journal of Hydraulic Engineering,
American Society of Civil Engineers, 116(5):659—674, 1990.
43. Tung, Y. K., “Uncertainty and Reliability Analysis,” in L. W. Mays, ed., Water Resources Handbook, McGraw-Hill, New
York, 1996.
44. Tung, Y. K. and Y. Bao, “On the Optimal Risk-Based Designs of Highway Drainage Structures,” Journal of Stochastic
Hydrology and Hydraulics, 4(4):311—324, 1990.
45. U.S. Army Corps of Engineers, Risk-Based Analysis for Flood Damage Reduction Studies, EM 1110-2-1619, Washington,
D.C., August 1996.
46. Vrijling, J. K., “Development of Probabilistic Design in Flood Defenses in the Netherlands,” in B. C. Yen and Y. K. Tung,
eds., Reliability and Uncertainty Analysis in Hydraulic Design, American Society of Civil Engineers, New York, 1993, pp.
133—178.
47. Yen, B. C., "Safety Factor in Hydrologic and Hydraulic Engineering Design," in E. A. McBean, K. W. Hipel, and T. E. Unny,
eds., Reliability in Water Resources Management, Water Resources Publications, Littleton, CO, 1979, pp. 389—407.
48. Yen, B. C., and A. H.-S. Ang, “Risk Analysis in Design of Hydraulic Projects,” in C. L. Chiu, ed., Stochastic Hydraulics,
Proceedings of the First International Symposium, University of Pittsburgh, Pittsburgh, PA, 1971, pp. 694—701.
49. Yen, B. C., S. T. Cheng, and C. S. Melching, “First Order Reliability Analysis,” in B. C. Yen, ed., Stochastic and Risk
Analysis in Hydraulic Engineering, Water Resources Publications, Littleton, CO, 1986.
