
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING

Int. J. Numer. Meth. Engng 2007; 72:486–504


Published online 5 March 2007 in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/nme.2025

Efficient characterization of the random eigenvalue problem in a polynomial chaos decomposition‡

Roger Ghanem1, ∗, † and Debraj Ghosh2


1 Department of Civil Engineering, University of Southern California, 210 KAP Hall, Los Angeles, CA, U.S.A.
2 Department of Mechanical Engineering, Stanford University, Bldg 500, Stanford, CA, U.S.A.

SUMMARY
A new procedure for characterizing the solution of the eigenvalue problem in the presence of uncertainty
is presented. The eigenvalues and eigenvectors are described through their projections on the polynomial
chaos basis. An efficient method for estimating the coefficients with respect to this basis is proposed.
The method uses a Galerkin-based approach by orthogonalizing the residual in the eigenvalue–eigenvector
equation to the subspace spanned by the basis functions used for approximation. In this way, the stochastic
problem is framed as a system of deterministic non-linear algebraic equations. This system of equations
is solved using a Newton–Raphson algorithm. Although the proposed approach is not based on statistical
sampling, its efficiency can be significantly enhanced by initializing the non-linear iterative
process with a small statistical sample synthesized through a Monte Carlo sampling scheme.
The proposed method offers a number of advantages over existing methods based on statistical sampling.
First, it provides an approximation to the complete probabilistic description of the eigensolution. Second,
it reduces the computational overhead associated with solving the statistical eigenvalue problem. Finally,
it circumvents the dependence of the statistical solution on the quality of the underlying random number
generator. Copyright © 2007 John Wiley & Sons, Ltd.

Received 27 May 2005; Revised 28 December 2006; Accepted 22 January 2007

KEY WORDS: random eigenvalue problem; polynomial chaos; stochastic reduced-order models

∗ Correspondence to: Roger Ghanem, Department of Civil Engineering, University of Southern California, 210 KAP
Hall, Los Angeles, CA, U.S.A.

E-mail: ghanem@usc.edu

A major part of this work was carried out when the second author was a graduate student at the Johns Hopkins
University, Baltimore, MD, U.S.A.

Contract/grant sponsor: AFOSR


Contract/grant sponsor: ONR

Copyright © 2007 John Wiley & Sons, Ltd.



1. INTRODUCTION

The observed behaviour of most physical systems exhibits some level of variability, the significance
of which grows as the demands on performance and reliability become more stringent.
With advances in sensing and manufacturing technologies, such demands are becoming the norm
rather than the exception. Advances in computational technology, on the other hand, have shed
significant light on the key elements in the manufacturing process that control a system’s behaviour
and performance. The persistent presence of specimen variability and spatial random heterogeneity
have diminished the level of confidence that can be attributed to deterministic model-based predic-
tions, and have adversely affected the role these predictions can play in design and maintenance
of engineered systems. Clearly, a rational procedure for improving predictability in this case is to
place the analysis in a multiscale context and to trace the source of variability to subscale features
that are not typically modeled at the coarse level. While this approach can chase the source of
variability down a hierarchy of scales, it can hardly get rid of it. Moreover, a multiscale description
of a typical engineering problem usually adds layers of complexity to its analysis. An alternative
approach to modeling this variability is to describe it in the context of probability theory,
associating measures of plausibility to its various levels. This approach permits the analysis to remain
at the single scale of interest to the designer/analyst while representing, in a rational manner, the
effect of subscale fluctuations. This is the general approach adopted in the present paper.
Of particular interest in the present paper is the characterization of eigenspaces associated with
linear dynamical systems whose dynamical properties have been described as stochastic processes.
Such a model can be viewed as a surrogate representation of observed spatial random fluctuations
in the mechanistic description of the system. The resulting random eigenvalue problem is clearly
of critical significance for dynamical systems in general. The ability to characterize in a useful
manner the probabilistic measure of modal quantities has far-reaching implications for the predicted
robustness and performance of the associated system. The ensuing stochastic reduced-order
models are more useful as they capture the variability observed in the complete system. Tradi-
tional characterizations of the solution to the random eigenvalue problem have involved low-order
Taylor expansions obtained through a perturbation formalism or statistical characterizations through
low-order statistics. In the present approach, a characterization is developed of the eigenvalues
and eigenvectors in terms of their co-ordinates in a Hilbert space. This choice for mathematical
characterization lends itself to the efficient and accurate representation of quantities of interest
such as statistical moments of all orders and probability distributions.
For certain problems of growing interest in science and engineering, random matrices are introduced
as random perturbations of finite-dimensional operators, providing a mathematical framework
to describe modeling uncertainty. These random matrices are usually not obtained from a
finite-dimensional representation of a partial differential operator, and in a number of interesting
cases, closed-form expressions of the statistical moments and probability density functions of their
eigenvalues and eigenvectors are available [1, 2]. The matrices of interest in the present paper, on
the other hand, are the result of a finite-dimensional approximation of an underlying continuous
system and their randomness is tied to the uncertainty in the parameters of this system. For
such systems, closed-form expressions are generally not available for the solution of the random
eigenvalue problem. Current approaches to this problem include statistical sampling [3],
perturbation techniques [4, 5], and polynomial chaos representations coupled with Galerkin projections
[6, 7]. Methods based on statistical sampling, while conceptually straightforward, are
computationally intensive. More important is the strong dependence of their performance and statistical


accuracy on the performance of the underlying random number generator and its ability to produce
good-quality independent random numbers. While the perturbation approach yields a concise
algorithm for computing the statistics of eigenvalues and eigenvectors, it quickly becomes unwieldy
when higher-order perturbations are needed. Moreover, implicit in this class of approaches
is the premise that fluctuations are small enough to be described as perturbations around some
nominal value. The polynomial chaos expansion [8] has already been implemented in the context
of the random eigenvalue problem [6, 9]. In particular, the coefficients in the chaos formulation
have been estimated by fitting a polynomial form to a statistical sample, effectively using the
so-called non-intrusive approach. An initial effort at implementing an approach whereby statistical
samples are completely bypassed has also been reported [7]. The present paper expands on this
last effort, providing the details of its implementation, together with numerical examples and an
analysis of its performance. The main distinction between the surface fitting approach [9] and
the present approach lies in the interpretation of the solution to the random eigenvalue problem.
Specifically, while standard statistical methods provide a pointwise approximation to the solution,
a weak-sense solution is herein obtained. This novel characterization of the solution provides an
alternative perspective on the random eigenvalue problem, both as related to its mathematical
properties as well as its practical significance.
The next section describes the mathematical setting of the problem and provides background on
technical challenges. Following that, a new method for characterizing the solution to the random
eigenproblem is presented. The method is then analysed with regard to its consistency
with standard characterization procedures. Procedures for the numerical solution of the resulting
non-linear algebraic system of equations are then presented, followed by a numerical
example.

2. PROBLEM DESCRIPTION
Throughout the paper it will be assumed that random matrices are finite-dimensional representations
of infinite-dimensional operators with random coefficients. Consider the eigenvalue problem for
an n-dimensional real symmetric random matrix K(ω):

$$K(\omega)\,\phi(\omega) = \lambda(\omega)\,\phi(\omega) \qquad (1)$$

with the normalization condition

$$\phi(\omega)^{T}\phi(\omega) = 1 \qquad (2)$$

where

$$\lambda(\omega) \in \mathbb{R}, \quad \phi(\omega) \in \mathbb{R}^{n}, \quad K(\omega) \in \mathbb{R}^{n \times n}, \quad \omega \in \Omega$$

and (Ω, F, P) is the probability space associated with the underlying physical experiments. The
space of square integrable random variables, L₂(Ω), forms a Hilbert space, with the norm denoted
by ‖·‖_{L₂(Ω)}. Matrix K(ω) could, for example, represent the stiffness matrix in a structural
mechanics problem. The randomness in K is inherited from randomness in the parameters of
the underlying physical system, such as elastic and dynamic parameters. In general, some of
these random parameters can be modeled as a set of m₁ random variables and some as random
processes indexed by the spatial variable x ∈ D, where D denotes the physical domain of the system. Moreover,
these processes can be represented with respect to some basis set in L₂(Ω, F, P) by relying, for


example, on the Karhunen–Loève expansion [8]. A finite-dimensional representation of these


processes yields additional random variables on which the matrix K depends. The set of all
random variables, including both the m₁ parameters above and the variables associated with discretizing the
stochastic processes, can be written as a set of m₂ random variables that completely characterizes the uncertainty
in the underlying system. In general, these random variables will be characterized by a joint
probability measure that is generally not Gaussian. Through any one of a number of
available procedures [10, 11], numerical algorithms can be developed to express this joint random
vector as a non-linear transformation of an independent Gaussian vector. This new set
of m independent standard normal random variables will be denoted by the m-dimensional vector ξ. Thus,
K(ω) can now also be denoted by K(ξ).
Solving a random eigenvalue problem involves characterizing the probabilistic measure of the
eigenvalues λ and eigenvectors φ using probabilistic information about matrix K. Polynomial chaos
decompositions [8] provide a rational framework for representing random variables and vectors
with respect to a basis set of orthogonal polynomials. Accordingly, a square integrable random
variable is represented in a Hilbert space with respect to a basis set consisting of multidimensional
Hermite polynomials of orthonormal Gaussian random variables [8, 12, 13]. Random vectors and
processes are represented in suitable product spaces. Extensions to other representations besides
Hermite polynomials have recently been introduced in the literature [14, 15]. Accordingly, the lth
eigenvalue and eigenvector can be expressed in a polynomial chaos representation as

$$\lambda_l = \sum_{i=0}^{\infty} \lambda_l^{(i)} \Psi_i, \qquad \phi_l = \sum_{i=0}^{\infty} \phi_l^{(i)} \Psi_i, \qquad \lambda_l^{(i)} \in \mathbb{R}, \ \phi_l^{(i)} \in \mathbb{R}^{n} \qquad (3)$$

Here, the Ψ_i are the multidimensional Hermite polynomials in the standard normal random vector ξ, with
properties

$$\Psi_0 \equiv 1, \qquad E\{\Psi_i\} = 0 \ \text{for} \ i>0, \qquad E\{\Psi_i \Psi_j\} = \delta_{ij} E\{\Psi_i^2\}$$

where E{·} denotes mathematical expectation. The deterministic coefficients λ_l^(i) and φ_l^(i), termed
the chaos coefficients, completely capture the probabilistic description of the random quantities
involved. The polynomial chaos decomposition of the eigenvalues and eigenvectors presumes that
each eigenvalue is in L₂(Ω) and each eigenvector is in L₂(Ω, ℝⁿ). This assumption is validated
in Appendix A. Truncating the series at the Pth term results in expansions of the form

$$\hat{\lambda}_l = \sum_{i=0}^{P-1} \lambda_l^{(i)} \Psi_i, \qquad \hat{\phi}_l = \sum_{i=0}^{P-1} \phi_l^{(i)} \Psi_i, \qquad \lambda_l^{(i)} \in \mathbb{R}, \ \phi_l^{(i)} \in \mathbb{R}^{n} \qquad (4)$$
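To make the basis concrete, the multidimensional Hermite polynomials Ψ_i can be generated from the one-dimensional recurrence He_{k+1}(x) = x·He_k(x) − k·He_{k−1}(x) and products over multi-indices. The sketch below is illustrative and not the authors' code (all function names are my own); it builds such a basis and verifies the orthogonality property E{Ψ_α²} = ∏_d α_d! by Gauss–Hermite quadrature. For m = 2 variables and total order 4 it recovers the P = 15 used later in the paper's numerical study.

```python
import itertools
import numpy as np

def hermite_e(n, x):
    """Probabilists' Hermite polynomial He_n(x) via the three-term recurrence."""
    h0, h1 = np.ones_like(x), x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

def multi_indices(m, p):
    """All multi-indices of dimension m with total order <= p, lowest order first."""
    return sorted((a for a in itertools.product(range(p + 1), repeat=m)
                   if sum(a) <= p), key=sum)

def psi(alpha, xi):
    """Multidimensional polynomial Psi_alpha evaluated at samples xi of shape (N, m)."""
    out = np.ones(xi.shape[0])
    for d, a in enumerate(alpha):
        out *= hermite_e(a, xi[:, d])
    return out

alphas = multi_indices(2, 4)
print(len(alphas))                 # 15 basis polynomials for m = 2, order p = 4

# check E{Psi_alpha^2} = prod(alpha_d!) with a tensorized Gauss-Hermite rule
x, w = np.polynomial.hermite_e.hermegauss(10)
w = w / w.sum()                    # normalize weights to the standard normal measure
X1, X2 = np.meshgrid(x, x)
XI = np.column_stack([X1.ravel(), X2.ravel()])
W = np.outer(w, w).ravel()
second_moment = np.sum(W * psi((2, 1), XI) ** 2)
print(second_moment)               # 2! * 1! = 2 for alpha = (2, 1)
```

The sorted multi-index list puts Ψ₀ (the constant) first, matching the ordering assumed by Equation (4).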

It is worth noting the difference between the structure of a deterministic and a random eigenproblem.
In the deterministic case, a typical eigenpair is of the form (λ_l, φ_l), where λ_l ∈ ℝ and
φ_l ∈ ℝⁿ, with n denoting the size of the matrix K. In the random case, on the other hand, the solution
corresponding to the lth physical mode consists of the set

$$\{\lambda_l^{(0)}, \ldots, \lambda_l^{(P-1)}, \phi_l^{(0)}, \ldots, \phi_l^{(P-1)}\} \quad \text{where} \ \lambda_l^{(i)} \in \mathbb{R}, \ \phi_l^{(i)} \in \mathbb{R}^{n}$$

A particular method, referred to hereafter as method I, for estimating the chaos coefficients
λ_l^(i) and φ_l^(i) is a simulation-based method [9]. In method I, the approximation error of the
eigenvalues and eigenvectors is minimized by making it orthogonal to the approximation space.


The orthogonality of the chaos basis then results in the following compact representation:

$$\lambda_l^{(i)} = \frac{E\{\lambda_l \Psi_i\}}{E\{\Psi_i^2\}}, \qquad \phi_l^{(i)} = \frac{E\{\phi_l \Psi_i\}}{E\{\Psi_i^2\}} \qquad (5)$$

The denominators in the above expressions can be evaluated exactly [8], whereas the numerators are
estimated using Monte Carlo sampling. Consider realizations of the set of random variables
ξ, where each realization of ξ corresponds to a realization of the stochastic physical system.
The corresponding deterministic eigenproblem can be readily solved for each realization. The
coefficients φ_l^(i) and λ_l^(i) are then easily estimated by approximating the inner products with
statistical averages over the samples provided by the realizations. It is noted at this point that these
chaos coefficients are in fact statistics of the random eigenvalues and eigenvectors.
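As an illustration of method I, and not the authors' code, the following sketch applies Equation (5) to an invented 2×2 random matrix whose dependence on a single standard normal variable ξ has the same quadratic form as Equation (47) later in the paper. Each sampled matrix is handed to a deterministic eigensolver, and the eigenvalue chaos coefficients are obtained as sample averages.

```python
import numpy as np
from math import factorial

def psi_1d(i, x):
    """Probabilists' Hermite polynomials: Psi_0 = 1, Psi_1 = x, Psi_2 = x^2 - 1, ..."""
    h0, h1 = np.ones_like(x), x
    if i == 0:
        return h0
    for k in range(1, i):
        h0, h1 = h1, x * h1 - k * h0
    return h1

# toy 2 x 2 random stiffness driven by one standard normal variable xi
K0 = np.array([[3.0, -1.0], [-1.0, 2.0]])
K1 = np.array([[0.3, 0.0], [0.0, 0.1]])

rng = np.random.default_rng(0)
N, P = 20_000, 3
xi = rng.standard_normal(N)

lam = np.empty((N, 2))
for s in range(N):                       # one deterministic eigensolve per realization
    lam[s] = np.linalg.eigvalsh(K0 + (xi[s] ** 2 - 1.0) / np.sqrt(2.0) * K1)

# Eq. (5): lambda^(i) = E{lambda Psi_i} / E{Psi_i^2}, with E{Psi_i^2} = i! in 1-D
coeff = np.array([[np.mean(lam[:, l] * psi_1d(i, xi)) / factorial(i)
                   for i in range(P)] for l in range(2)])
print(coeff)
```

For eigenvector coefficients the sign ambiguity of the numerical eigensolver must first be fixed across realizations (e.g. by aligning each sampled eigenvector with the corresponding mean-system mode) before the averages of Equation (5) are taken.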
It has been observed that this method for estimating the chaos coefficients depends largely
on the random numbers generated to simulate the realizations, both qualitatively and quantita-
tively [16, 17]. The qualitative dependence requires the generated random variables {ξ_i} to be
orthogonal and to follow a standard Gaussian distribution, which is hard to achieve for a finite-size sample.
size sample. Given the Gaussian character of the random variables involved, this difficulty can be
readily addressed through a Gram–Schmidt orthogonalization procedure. The quantitative depen-
dence requires a very large number of realizations for convergence, especially for estimating the
coefficients of the higher-order polynomials. Since for each realization, an eigenproblem needs to
be solved, the method is computationally intensive, especially for large eigensystems. To over-
come these difficulties associated with method I, another Galerkin-based method is proposed in
this paper. According to this method, the residual of the equation governing the eigenproblem is
minimized by forcing it to be orthogonal to the approximation space. It reduces the problem to a
set of deterministic non-linear equations. In this paper, we will refer to this method as method II.

3. MINIMIZING RESIDUAL IN THE EIGENPROBLEM

The matrix K(ξ) can itself be approximated by a finite chaos decomposition in the form

$$K = \sum_{i=0}^{L-1} \Psi_i K^{(i)} \qquad (6)$$

with the (n × n) matrices K^(i) representing the chaos coefficients. Substituting this expansion, along
with the expansions for λ and φ from Equation (4), into Equation (1) results in

$$\sum_{i=0}^{L-1} \sum_{j=0}^{P-1} \Psi_i \Psi_j K^{(i)} \phi^{(j)} = \sum_{i=0}^{P-1} \sum_{j=0}^{P-1} \Psi_i \Psi_j \lambda^{(i)} \phi^{(j)} + r \qquad (7)$$

where the random vector r denotes a residual that appears due to the finite-order approximation of
the eigenvalues and eigenvectors. For a known value of L the parameter P is chosen, usually with
P > L. The subscript l is dropped from λ_l and φ_l hereafter, implicitly assuming that the analysis
is carried out for a single physical mode, unless otherwise mentioned. The residual r in the above
equation is minimized by requiring it to be orthogonal to the approximation subspace spanned by
{Ψ_k}, k = 0, ..., P−1, resulting in

$$\sum_{i=0}^{L-1} \sum_{j=0}^{P-1} E\{\Psi_i \Psi_j \Psi_k\} K^{(i)} \phi^{(j)} = \sum_{i=0}^{P-1} \sum_{j=0}^{P-1} E\{\Psi_i \Psi_j \Psi_k\} \lambda^{(i)} \phi^{(j)}, \qquad k = 0, \ldots, P-1 \qquad (8)$$


which is rewritten in the form

$$\mathcal{A} U = \mathcal{K} U \qquad (9)$$

Here U is an nP-dimensional vector, constructed by arranging the P n-dimensional vectors
φ^(j) in a single column. The matrices 𝒜 and 𝒦 are of dimension (nP × nP), given by

$$\mathcal{A} U =
\begin{bmatrix}
\displaystyle\sum_{i=0}^{L-1} K^{(i)} \sum_{j=0}^{P-1} E\{\Psi_i \Psi_j \Psi_0\}\,\phi^{(j)} \\
\vdots \\
\displaystyle\sum_{i=0}^{L-1} K^{(i)} \sum_{j=0}^{P-1} E\{\Psi_i \Psi_j \Psi_k\}\,\phi^{(j)} \\
\vdots \\
\displaystyle\sum_{i=0}^{L-1} K^{(i)} \sum_{j=0}^{P-1} E\{\Psi_i \Psi_j \Psi_{P-1}\}\,\phi^{(j)}
\end{bmatrix}
= \sum_{i=0}^{L-1}
\begin{bmatrix}
K^{(i)} & 0 & \cdots & 0 \\
0 & K^{(i)} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & K^{(i)}
\end{bmatrix} C_i U
= \sum_{i=0}^{L-1} B_i C_i U \qquad (10)$$

and

$$\mathcal{K} U =
\begin{bmatrix}
\displaystyle\sum_{i=0}^{P-1} \lambda^{(i)} \sum_{j=0}^{P-1} E\{\Psi_i \Psi_j \Psi_0\}\,\phi^{(j)} \\
\vdots \\
\displaystyle\sum_{i=0}^{P-1} \lambda^{(i)} \sum_{j=0}^{P-1} E\{\Psi_i \Psi_j \Psi_k\}\,\phi^{(j)} \\
\vdots \\
\displaystyle\sum_{i=0}^{P-1} \lambda^{(i)} \sum_{j=0}^{P-1} E\{\Psi_i \Psi_j \Psi_{P-1}\}\,\phi^{(j)}
\end{bmatrix}
= \sum_{i=0}^{P-1} \lambda^{(i)} C_i U \qquad (11)$$


Here, C_i is a symmetric matrix of dimension (nP × nP) whose (n × n)-dimensional submatrix at
location (k, j) is given by

$$[C_i]_{kj} = E\{\Psi_i \Psi_j \Psi_k\}\, I_n \qquad (12)$$

I_n being the n-dimensional identity matrix. The matrices B_i are of size (nP × nP), with
diagonal submatrices K^(i) and zero off-diagonal submatrices.
Now, Equation (9) can be written as

$$\sum_{i=0}^{L-1} B_i C_i U = \sum_{i=0}^{P-1} \lambda^{(i)} C_i U \qquad (13)$$

Recalling that P > L, the above expression can also be written as

$$\sum_{i=0}^{P-1} (B_i C_i - \lambda^{(i)} C_i) U = 0, \qquad B_i = 0 \ \text{for} \ i \geq L \qquad (14)$$

Similarly, the normalization condition (2) becomes

$$\sum_{i=0}^{P-1} \sum_{j=0}^{P-1} E\{\Psi_i \Psi_j \Psi_k\}\, \phi^{(i)T} \phi^{(j)} = \delta_{k0}, \qquad k = 0, \ldots, P-1 \qquad (15)$$

where δ_{ij} denotes the Kronecker delta. This equation can also be rewritten as

$$U^{T} C_k U = \delta_{k0} \qquad (16)$$

The problem can be viewed from two different perspectives. In the first, one considers the system of
Equations (8) and (15) and looks for P n-dimensional vectors φ^(i) and P associated
scalars λ^(i). In the second, one considers Equations (14) and (16) and solves for an nP-dimensional
vector U and P associated scalars λ^(i). Note that in either case, these sought sets of vectors and
scalars describe the behaviour of a single physical mode only. Thus, according to the second
perspective, in order to capture the behaviour of n physical modes, n nP-dimensional vectors,
along with n sets of P scalars each, are sought in an nP-dimensional
space. The concept of orthogonalizing the error in the eigenproblem to the approximation subspace
has been previously suggested in conjunction with mitigating assumptions [18]. In this paper, a
different normalization scheme is used, the mathematical structure of the problem is explored,
numerical strategies for fast convergence are suggested, and the advantages and limitations of the
method are discussed in detail.
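The triple products E{Ψ_iΨ_jΨ_k} that populate the matrices C_i of Equation (12) can be tabulated once and reused throughout the Galerkin system. A minimal sketch (my own construction, restricted to one random variable) computes them by Gauss–Hermite quadrature and assembles each C_i as a Kronecker product with I_n:

```python
import numpy as np

def hermite_e(n, x):
    """Probabilists' Hermite polynomial He_n via the three-term recurrence."""
    h0, h1 = np.ones_like(x), x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

P, n = 4, 2
x, w = np.polynomial.hermite_e.hermegauss(20)
w = w / w.sum()                     # quadrature rule for the standard normal measure

# c[i, j, k] = E{Psi_i Psi_j Psi_k}; exact here, since a 20-point rule
# integrates polynomials up to degree 39 exactly
c = np.array([[[np.sum(w * hermite_e(i, x) * hermite_e(j, x) * hermite_e(k, x))
                for k in range(P)] for j in range(P)] for i in range(P)])

# Eq. (12): the (k, j) block of C_i is E{Psi_i Psi_j Psi_k} I_n
C = [np.kron(c[i], np.eye(n)) for i in range(P)]

print(c[1, 1, 2])                   # E{Psi_1^2 Psi_2} = E{x^2 (x^2 - 1)} = 2
print(C[0].shape)
```

Because c[i, j, k] is symmetric in j and k, each C_i is symmetric, as stated below Equation (12); in several random variables the same tabulation applies with a tensorized quadrature rule.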

3.1. Consistency analysis


It is restated that methods I and II minimize two different error quantities associated with the
random eigenvalue problem. It is therefore important to investigate the conditions under which the
two methods yield similar solutions. In this section, a consistency analysis is carried out to
address this issue.
The term ‘exact solution’ will be used here to refer to a solution (λ, φ) satisfying both the
eigenvalue problem K(ω)φ(ω) = λ(ω)φ(ω) and the normalization condition φ(ω)ᵀφ(ω) = 1 almost
everywhere (a.e.). The following proposition implies that if the approximation subspace is suitable


for approximating the eigenvalues and eigenvectors, then both methods I and II yield consistent
estimates of the chaos coefficients.

Proposition
Let (Ω, F, P) be the probability space on which the random eigenvalue problem is defined. Let λ and
φ denote a solution pair to the random eigenvalue problem that satisfies the relationship

$$K(\omega)\phi(\omega) = \lambda(\omega)\phi(\omega) \ \text{a.e.}; \qquad K(\omega) \in \mathbb{R}^{n \times n} \ \text{and symmetric}$$

Define the norm in ℝⁿ as

$$\|v\|_{\mathbb{R}^n} = (v^{T} v)^{1/2}, \qquad v \in \mathbb{R}^n \qquad (17)$$

and the norm of an ℝⁿ-valued random variable as

$$\|\cdot\|_{L_2(\Omega,\mathbb{R}^n)} = \bigl(E\{\|\cdot\|_{\mathbb{R}^n}^2\}\bigr)^{1/2} \qquad (18)$$

Let λ̂ and φ̂ denote the estimates of a random eigenvalue and a random eigenvector, respectively.
If there exist four positive numbers ε, δ, M₁, and M₂ such that

$$\|\hat{\lambda} - \lambda\|_{L_2(\Omega)} \leq \epsilon \qquad (19)$$

$$\|\hat{\phi} - \phi\|_{L_2(\Omega,\mathbb{R}^n)} \leq \delta \qquad (20)$$

$$\|\hat{\phi}\|_{L_2(\Omega,\mathbb{R}^n)} \leq M_1 \qquad (21)$$

$$\bigl(E\{\|K - \lambda I\|^2\}\bigr)^{1/2} \leq M_2 \quad \text{(here } \|\cdot\| \text{ denotes the matrix norm [19])} \qquad (22)$$

then

$$E\{\|K\hat{\phi} - \hat{\lambda}\hat{\phi}\|_{\mathbb{R}^n}\} \leq M_1\epsilon + M_2\delta \qquad (23)$$

Proof
Let λ̂ and φ̂ be as defined above. Then,

$$\|K\hat{\phi} - \hat{\lambda}\hat{\phi}\|_{\mathbb{R}^n} = \|(K - \lambda I)(\hat{\phi} - \phi) + (\lambda - \hat{\lambda})\hat{\phi}\|_{\mathbb{R}^n} \qquad (24)$$

Using the Minkowski inequality,

$$\|(K - \lambda I)(\hat{\phi} - \phi) + (\lambda - \hat{\lambda})\hat{\phi}\|_{\mathbb{R}^n} \leq \|(K - \lambda I)(\hat{\phi} - \phi)\|_{\mathbb{R}^n} + \|(\lambda - \hat{\lambda})\hat{\phi}\|_{\mathbb{R}^n} \quad \text{a.e.} \qquad (25)$$

Thus,

$$\|K\hat{\phi} - \hat{\lambda}\hat{\phi}\|_{\mathbb{R}^n} \leq \|(K - \lambda I)(\hat{\phi} - \phi)\|_{\mathbb{R}^n} + \|(\lambda - \hat{\lambda})\hat{\phi}\|_{\mathbb{R}^n} \quad \text{a.e.} \qquad (26)$$

Now the operator (K − λI) is finite-dimensional for all ω ∈ Ω, and is thus bounded a.e. Thus [19],

$$\|(K - \lambda I)(\hat{\phi} - \phi)\|_{\mathbb{R}^n} \leq \|K - \lambda I\|\,\|\hat{\phi} - \phi\|_{\mathbb{R}^n} \quad \text{a.e.} \qquad (27)$$


Now ‖K − λI‖ and ‖φ̂ − φ‖ are non-negative real-valued random variables. Thus [20],

$$E\{\|(K - \lambda I)(\hat{\phi} - \phi)\|_{\mathbb{R}^n}\} \leq E\{\|K - \lambda I\|\,\|\hat{\phi} - \phi\|_{\mathbb{R}^n}\} \qquad (28)$$

Using the Cauchy–Schwarz inequality [19],

$$|E\{\|K - \lambda I\|\,\|\hat{\phi} - \phi\|_{\mathbb{R}^n}\}| \leq \bigl(E\{\|K - \lambda I\|^2\}\bigr)^{1/2} \bigl(E\{\|\hat{\phi} - \phi\|_{\mathbb{R}^n}^2\}\bigr)^{1/2} \qquad (29)$$

From Equations (28) and (29),

$$E\{\|(K - \lambda I)(\hat{\phi} - \phi)\|_{\mathbb{R}^n}\} \leq \bigl(E\{\|K - \lambda I\|^2\}\bigr)^{1/2} \bigl(E\{\|\hat{\phi} - \phi\|_{\mathbb{R}^n}^2\}\bigr)^{1/2} \qquad (30)$$

which is equivalent to

$$E\{\|(K - \lambda I)(\hat{\phi} - \phi)\|_{\mathbb{R}^n}\} \leq M_2\delta \qquad (31)$$

Now, using the Cauchy–Schwarz inequality again,

$$E\{\|(\lambda - \hat{\lambda})\hat{\phi}\|_{\mathbb{R}^n}\} = E\{|\lambda - \hat{\lambda}|\,\|\hat{\phi}\|_{\mathbb{R}^n}\} \leq \bigl(E\{|\lambda - \hat{\lambda}|^2\}\bigr)^{1/2} \bigl(E\{\|\hat{\phi}\|_{\mathbb{R}^n}^2\}\bigr)^{1/2} \leq M_1\epsilon \qquad (32)$$

Combining Equations (26), (31), and (32),

$$E\{\|K\hat{\phi} - \hat{\lambda}\hat{\phi}\|_{\mathbb{R}^n}\} \leq M_1\epsilon + M_2\delta \qquad (33)$$

The results of the above proposition can be further refined under the assumption that the variance
of ‖Kφ̂ − λ̂φ̂‖ is bounded above by a positive number M₃. Specifically, under that assumption,
it readily follows from the previous proof that

$$\|K\hat{\phi} - \hat{\lambda}\hat{\phi}\|_{L_2(\Omega,\mathbb{R}^n)} \leq \sqrt{(M_1\epsilon + M_2\delta)^2 + M_3} \qquad (34)$$

The above analysis provides a sufficient condition for the solutions obtained through methods I
and II to be consistent with each other. Thus, assuming that condition (22) is satisfied, consider
an approximation pair {λ̂, φ̂} that makes the error quantities associated with method I very small
(Equations (19) and (20)), satisfies condition (21), and ensures that the variance of ‖Kφ̂ − λ̂φ̂‖
is bounded above by a small positive number M₃. This same approximation pair then also bounds
the error associated with method II, as indicated by Equation (34). In the context of a series
representation such as the polynomial chaos expansion, the implications are as follows: if the chosen basis set achieves a good
approximation through method I while satisfying condition (21) and keeping the variance of
‖Kφ̂ − λ̂φ̂‖ very small, then the same basis set is also expected to achieve a good approximation
through method II.
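The bound (23) can be checked numerically. The sketch below is my own toy example, with an invented 2×2 random matrix: it uses the mean-system eigenpair as a deliberately crude estimate (λ̂, φ̂), estimates ε, δ, M₁, and M₂ by Monte Carlo, and confirms that the sampled left-hand side of (23) stays below M₁ε + M₂δ. The inequality also holds for the empirical measure, since the proof uses only the Minkowski and Cauchy–Schwarz inequalities.

```python
import numpy as np

rng = np.random.default_rng(3)
n, N = 2, 20_000

K0 = np.array([[3.0, -1.0], [-1.0, 2.0]])
K1 = np.array([[0.3, 0.0], [0.0, 0.1]])
xi = rng.standard_normal(N)

# crude estimator: freeze the lowest eigenpair of the mean matrix,
# so phi_hat is a constant unit vector and M1 = 1
w0, v0 = np.linalg.eigh(K0)
lam_hat, phi_hat = w0[0], v0[:, 0]

lam = np.empty(N); dphi = np.empty(N); op = np.empty(N); lhs = np.empty(N)
for s in range(N):
    Ks = K0 + xi[s] * K1
    w, v = np.linalg.eigh(Ks)
    phi = v[:, 0] * np.sign(v[:, 0] @ phi_hat)         # fix the sign ambiguity
    lam[s] = w[0]
    dphi[s] = np.linalg.norm(phi_hat - phi)
    op[s] = np.linalg.norm(Ks - lam[s] * np.eye(n), 2)  # spectral matrix norm
    lhs[s] = np.linalg.norm(Ks @ phi_hat - lam_hat * phi_hat)

eps = np.sqrt(np.mean((lam_hat - lam) ** 2))    # Eq. (19)
delta = np.sqrt(np.mean(dphi ** 2))             # Eq. (20)
M2 = np.sqrt(np.mean(op ** 2))                  # Eq. (22)
print(lhs.mean(), 1.0 * eps + M2 * delta)       # Eq. (23): left side <= right side
```

A tighter estimate (λ̂, φ̂), such as a chaos expansion of the eigenpair, shrinks ε and δ and hence the bound itself.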

4. SOLUTION TO THE SYSTEM OF EQUATIONS

The random eigenvalue problem is posed as a set of (nP + P) non-linear deterministic equations
for each physical mode of the stochastic system. To solve the system of non-linear equations, two


basic approaches are identified here. The first is to view the problem as simultaneously
solving (nP + P) non-linear algebraic equations using, for example, a Newton–Raphson (NR)
algorithm. The second is to treat the problem as a norm minimization problem and solve it
with a suitable optimization algorithm. In this paper, the first approach, solution by the NR method,
is described and used for the numerical study. The norm minimization approach is mentioned
merely as another possible direction. In both approaches, the problem is considered as a search for
an optimal point in R^(nP+P).

4.1. A Newton–Raphson approach


The first method adopted here to solve the deterministic set of equations is the NR method, in which
a system of non-linear equations is solved iteratively; it exhibits quadratic convergence near
the actual solution point. According to this method, all (nP + P) equations, (8) and (15), are written
in the form F(x) = 0, where x is a vector containing the set {λ^(0), ..., λ^(P−1), φ^(0), ..., φ^(P−1)}.
Expanding F(x) in a Taylor series around x yields

$$F(x + \Delta x) = F(x) + J \cdot \Delta x + O(\|\Delta x\|^2)$$

where

$$J_{ij} \equiv \frac{\partial F_i}{\partial x_j}$$

Searching for the zeros of F(x), impose F(x + Δx) = 0. Neglecting the higher-order terms results in

$$J \cdot \Delta x = -F \qquad (35)$$

which can be solved for Δx. An updated estimate of x is then obtained as

$$x_{\text{new}} = x_{\text{old}} + \Delta x \qquad (36)$$

Clearly, numerical evaluation of the Jacobian is required in order to solve Equation (35). From
Equation (8), its entries can be expressed as

$$\frac{\partial F_k}{\partial \lambda^{(i)}} = -\sum_{j=0}^{P-1} E\{\Psi_i \Psi_j \Psi_k\}\,\phi^{(j)}, \qquad k = 0, \ldots, P-1, \quad i = 0, \ldots, P-1 \qquad (37)$$

$$\frac{\partial F_k}{\partial \phi^{(j)}} = \sum_{i=0}^{L-1} E\{\Psi_i \Psi_j \Psi_k\}\, K^{(i)} - \sum_{i=0}^{P-1} E\{\Psi_i \Psi_j \Psi_k\}\,\lambda^{(i)} I_n, \qquad k = 0, \ldots, P-1, \quad j = 0, \ldots, P-1 \qquad (38)$$

Here, F_k is the set of n functions from Equation (8) corresponding to a given k. It is to be noted
that Equation (38) is the partial derivative of a vector F_k with respect to another vector φ^(j) and
is expressed in matrix form, where the (i, l)th term of the matrix represents the derivative of the ith
function of the set F_k with respect to the lth component of the vector φ^(j), denoted below as φ^(jl).


From Equation (15),

$$\frac{\partial F_k}{\partial \lambda^{(j)}} = 0, \qquad k = 0, \ldots, P-1, \quad j = 0, \ldots, P-1 \qquad (39)$$

$$\frac{\partial F_k}{\partial \phi^{(jl)}} = 2 \sum_{i=0}^{P-1} \phi^{(il)} E\{\Psi_i \Psi_j \Psi_k\}, \qquad k = 0, \ldots, P-1, \quad j = 0, \ldots, P-1 \qquad (40)$$

Detailed derivations of Equations (39) and (40) are presented in Appendix B. The Jacobian J
is calculated from Equations (37)–(40) and then substituted into Equation (35) to find Δx. Finally,
from Equation (36), the new estimate of the solution, x_new, is computed. The whole procedure is
iterated until a stopping criterion is satisfied, imposed here on the quantity ‖F(x)‖.
Equation (35) is a set of (n + 1)P linear equations in (n + 1)P variables. It can be solved
either by direct methods, such as Gaussian elimination or LU factorization, or by iterative methods, such as
Jacobi, Gauss–Seidel, or GMRES. Judicious selection of an appropriate method may significantly
reduce the computational overhead.
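To make the iteration concrete, the following self-contained sketch solves the coupled system (8) and (15) for a 2×2 matrix with a first-order chaos in one variable (n = 2, L = P = 2), giving nP + P = 6 unknowns. For brevity it uses a finite-difference Jacobian instead of the analytic expressions (37)–(40), and it is initialized from the mean-system eigenpair; all numerical values are invented for the illustration.

```python
import numpy as np

def herm(m, x):
    """Probabilists' Hermite polynomial He_m via the three-term recurrence."""
    h0, h1 = np.ones_like(x), x
    if m == 0:
        return h0
    for k in range(1, m):
        h0, h1 = h1, x * h1 - k * h0
    return h1

P, L, n = 2, 2, 2
xq, wq = np.polynomial.hermite_e.hermegauss(10)
wq = wq / wq.sum()
c = np.array([[[np.sum(wq * herm(i, xq) * herm(j, xq) * herm(k, xq))
                for k in range(P)] for j in range(P)] for i in range(P)])

K = [np.array([[3.0, -1.0], [-1.0, 2.0]]),   # K^(0): mean matrix
     np.array([[0.3, 0.0], [0.0, 0.1]])]     # K^(1): fluctuation

def F(x):
    """Residuals of Eqs. (8) and (15); x stacks lambda^(0..P-1) and phi^(0..P-1)."""
    lam, phi = x[:P], x[P:].reshape(P, n)
    res = []
    for k in range(P):
        r = np.zeros(n)
        for i in range(L):
            for j in range(P):
                r += c[i, j, k] * (K[i] @ phi[j])       # Galerkin stiffness term
        for i in range(P):
            for j in range(P):
                r -= c[i, j, k] * lam[i] * phi[j]       # eigenvalue term
        res.append(r)
    norm = [sum(c[i, j, k] * (phi[i] @ phi[j]) for i in range(P) for j in range(P))
            - (1.0 if k == 0 else 0.0) for k in range(P)]  # Eq. (15)
    return np.concatenate(res + [np.array(norm)])

def newton(x, tol=1e-10, h=1e-7, maxit=50):
    """Newton-Raphson, Eqs. (35)-(36), with a finite-difference Jacobian."""
    for _ in range(maxit):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        J = np.empty((x.size, x.size))
        for j in range(x.size):
            e = np.zeros(x.size)
            e[j] = h
            J[:, j] = (F(x + e) - f) / h
        x = x + np.linalg.solve(J, -f)
    return x

w0, v0 = np.linalg.eigh(K[0])          # mean-system eigenpair as the initial guess
x0 = np.concatenate([[w0[0], 0.0], v0[:, 0], np.zeros(n)])
sol = newton(x0)
print(np.linalg.norm(F(sol)))          # converged residual
```

Initializing from the deterministic eigenpair is exactly the strategy the summary describes: it places the iteration inside the quadratic-convergence basin of the NR method.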

4.2. A norm minimization approach


In this approach, the Euclidean norm of the residual of Equation (14) is minimized:

$$(\lambda^{(0)}, \ldots, \lambda^{(P-1)}, U) = \arg\min_{\substack{\lambda^{(0)}, \ldots, \lambda^{(P-1)} \in \mathbb{R} \\ U \in \mathbb{R}^{nP}}} \left\| \sum_{i=0}^{P-1} (B_i C_i - \lambda^{(i)} C_i) U \right\| \qquad (41)$$

subject to the constraints (16). Although it can be expected that the global minimum of
the optimization problem will coincide with the solution to (14) and (16), in general there is no
guaranteed globally convergent method. However, techniques such as genetic algorithms or hybrid
methods can be used to explore the optimization space efficiently.

5. STATISTICS OF THE EIGENVALUES AND EIGENVECTORS

Using the estimated chaos coefficients, statistical moments of the eigenvalues and eigenvectors
can be readily computed, and statistical realizations can be synthesized to estimate the probability
density functions. In particular, the first- and second-order statistical moments of the eigenvalues
are obtained as

$$\bar{\lambda}_l \equiv E\{\hat{\lambda}_l\} = \lambda_l^{(0)}, \qquad E\{(\hat{\lambda}_l - \bar{\lambda}_l)^2\} = \sum_{i=1}^{P-1} E\{\Psi_i^2\}\,(\lambda_l^{(i)})^2 \qquad (42)$$

Statistics of the modal vectors can be described using statistical modal interaction [21, 22].
Accordingly, a statistical physical mode of an n degrees-of-freedom (dof) system can be expressed
as a linear combination of the deterministic physical modes as

$$\hat{\phi}_l = \sum_{i=1}^{n} e_{il}\,\bar{\phi}_i, \qquad l = 1, \ldots, n \qquad (43)$$

where the random variable e_il represents the random contribution of the ith deterministic mode to
the lth mode of the stochastic system. Physically, e_il describes the behaviour of the modes of the
stochastic system in terms of the modes of the nominal (mean) system. One example where this
representation is useful is predicting the behaviour of stochastic systems when the behaviour of
the nominal system is known. The representation can also be used for efficient and accurate system
reduction. It is assumed here that the deterministic and probabilistic eigenvectors are normalized
according to the same scheme. The first two moments of e_il can be obtained as

$$E\{e_{il}\} = \sum_{k=0}^{P-1} E\{\Psi_k\}\,C_{ikl} = C_{i0l} \qquad (44)$$

and

$$E\{(e_{il})^2\} = \sum_{k=0}^{P-1} E\{\Psi_k^2\}\,(C_{ikl})^2 \qquad (45)$$

where C_ikl is the projection of the kth chaos component of the lth mode of the stochastic system
on the ith mode of the deterministic (mean) system, computed from

$$\phi_l^{(k)} = \sum_{i=1}^{n} C_{ikl}\,\bar{\phi}_i, \qquad l = 1, \ldots, n, \quad k = 0, \ldots, P-1 \qquad (46)$$

The quantity E{(e_il)²} will be used in this paper to characterize the modal interaction.
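As a sketch of how Equation (42) is used in practice (with illustrative coefficient values, not the paper's), the first two moments of an eigenvalue follow directly from its chaos coefficients, and can be cross-checked by synthesizing realizations from the same expansion:

```python
import numpy as np
from math import factorial

# chaos coefficients of one eigenvalue in a 1-variable, order-3 expansion
# (illustrative numbers only)
lam_coeff = np.array([141.16, 6.2, 1.1, 0.08])
E_psi2 = np.array([factorial(i) for i in range(4)])    # E{Psi_i^2} = i! in 1-D

mean = lam_coeff[0]                                    # Eq. (42), first moment
var = float(np.sum(E_psi2[1:] * lam_coeff[1:] ** 2))   # Eq. (42), second moment

# cross-check by synthesizing realizations from the same expansion
rng = np.random.default_rng(1)
xi = rng.standard_normal(500_000)
psi = np.vstack([np.ones_like(xi), xi, xi ** 2 - 1.0, xi ** 3 - 3.0 * xi])
samples = lam_coeff @ psi

print(mean, var)
print(samples.mean(), samples.var())   # agree with the moments above to sampling error
```

The synthesized samples can also be histogrammed to estimate the probability density function, as the opening of this section suggests.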

6. NUMERICAL STUDY

In order to demonstrate method II with a numerical example, a 3-storey building frame (see Figure 1)
with two translational and one torsional dof at each storey level is selected [23]. The frame is divided
into six bays in one direction and four bays in the other. In each direction the column spacing
is 6 m. The frame is outfitted with linear restoring devices, or springs. The total restoring force in

Figure 1. The 9-dof frame considered for numerical study.


Figure 2. Plan view of the frame (4 bays at 6 m by 6 bays at 6 m); the locations of the restoring devices are indicated by five thick segments.

Table I. Eigenvalues of the mean (deterministic) system.

Mode        1       2       3        4        5        6        7          8           9
Eigenvalue  141.16  202.03  1108.23  1586.13  2314.15  3312.08  19 264.97  151 246.49  315 824.75

each floor is the sum of the restoring force from the columns and from the five restoring devices
placed in each floor at the spatial locations shown in Figure 2. The restoring devices are assumed
to be the source of uncertainty in the system. The stiffnesses of the two devices are assumed to
be independent random variables, specified, respectively, in the form [24]
1 2
kr 1 = k̄r 1 + √ (21 − 1), kr 2 = k̄r 2 + √ (22 − 1) (47)
2 2
where kr 1 and kr 2 denote the device stiffness in directions 1 and 2, respectively, k̄r 1 and k̄r 2 are
their respective mean values, 1 and 2 their standard√ deviations, 1 and 2 are two independent
standard normal random variables. A constraint i / 2<k̄ri , i = 1, 2 ensures positive definiteness
of the stiffness matrix a.e. Thus, in Equation (6), the value of L is 6. The non-zero contributions
to the stiffness matrix from the frame columns are given by, K11 = 300, K14 = −150, K22 = 300,
K25 = −150, K33 = 64 800, K36 = −32 400, K44 = 300, K47 = −150, K55 = 300, K58 = −150,
K66 = 64 800, K69 = −32 400, K77 = 150, K88 = 150, K99 = 32 400, where K33 , K66 , K99 are
in MN m and others are in MN/m. Moreover, the mean stiffness of the devices in directions 1
and 2 are k̄_r1 = k̄_r2 = 300 MN/m. The total mean contribution of the five devices in each floor
is therefore given by k11 = 600 MN/m, k22 = 900 MN/m, k33 = 64 800 MN m, k13 = −1800 MN,
k23 = 1800 MN.
Here, the eigenanalysis of the stiffness matrix is performed without accounting for the mass matrix;
all stiffness matrix entries are in MN/m or MN m as noted above. The analysis can be extended
to general buckling or dynamic analysis by including the geometric stiffness matrix
or the mass matrix. The eigenvalues of the deterministic mean system are shown in Table I. The
random eigenvalues and eigenvectors are represented in a fourth-order chaos expansion. A fourth-order


Figure 3. Convergence of method I, measured by convergence of σ_λ. (The plot shows the standard deviations of λ8 and λ9, ×10⁴, against the number of realizations on a log10 scale.)

expansion in chaos polynomials of two independent standard normal random variables corresponds
to P = 15. Using both methods I and II, the chaos coefficients are evaluated for the random system
with the standard deviations of the restoring device stiffness in directions 1 and 2 set to σ_1 = σ_2 = 10% of
the respective mean values.
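The count P = 15 follows from the usual dimension formula for a total-degree polynomial chaos basis, C(n + p, p) for n variables and order p; a quick check:

```python
from math import comb

def num_pc_terms(n_vars: int, order: int) -> int:
    """Number of polynomial chaos basis functions of total degree <= order."""
    return comb(n_vars + order, order)

# Fourth-order expansion in two standard normal variables:
assert num_pc_terms(2, 4) == 15
```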
First, the chaos coefficients are estimated using method I. The following procedure is used to
determine the number of realizations needed for a good estimate of the chaos coefficients. The
coefficients are estimated using different sample sizes, and the standard deviations of the eigenvalues
are computed from the corresponding chaos expansions using Equation (42). Convergence of
these standard deviations is taken as the convergence criterion for method I. To show
the typical nature of the convergence, the standard deviations of the eighth and ninth eigenvalues are
plotted against the sample size in Figure 3. Converged behaviour is observed at a
sample size of about one million. Hence, the standard deviations corresponding to a sample size of
ten million can safely be taken as exact and used as a reference. Next, a tolerance
of 1% relative to this reference is set for all modes. A detailed numerical study shows
that at least 150 × 10³ realizations are required to achieve this level of confidence.
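The sampling-based projection underlying method I can be sketched for a scalar quantity with a known chaos expansion. The function lam below is a hypothetical stand-in for a random eigenvalue, not the frame model; its coefficients are estimated by sample averaging against the one-dimensional Hermite basis, and the standard deviation is then recovered from the coefficients in the manner of Equation (42):

```python
import numpy as np

rng = np.random.default_rng(1)

# Probabilists' Hermite polynomials psi_0..psi_2 and their norms E{psi_k^2} = k!.
def psi(k, x):
    return [np.ones_like(x), x, x**2 - 1.0][k]

norms = [1.0, 1.0, 2.0]

# Hypothetical scalar "eigenvalue" with a known chaos expansion, for illustration:
# lam(xi) = 100 + 5*(xi^2 - 1), so the exact coefficients are (100, 0, 5)
# and the exact standard deviation is 5*sqrt(2).
def lam(xi):
    return 100.0 + 5.0 * (xi**2 - 1.0)

for n in (10**3, 10**5):
    xi = rng.standard_normal(n)
    lam_s = lam(xi)
    # Chaos coefficients by sample-average projection: lam_k = E{lam psi_k}/E{psi_k^2}.
    coeffs = [np.mean(lam_s * psi(k, xi)) / norms[k] for k in range(3)]
    # As in Equation (42): sigma^2 = sum_{k>=1} lam_k^2 E{psi_k^2}.
    sigma = np.sqrt(sum(c**2 * nk for c, nk in zip(coeffs[1:], norms[1:])))
    print(n, sigma)  # converges toward 5*sqrt(2) as n grows
```

The slow, sample-size-dependent convergence visible here is exactly what motivates the reference solution at ten million realizations in the study above.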
The chaos coefficients are then estimated using method II. To start the NR iterations, the initial
iterate is chosen as follows: the coefficients of the polynomials up to second order are set equal to
the coefficients estimated by method I using 100 realizations, and the higher-order coefficients are
set to zero. The computational times required by methods I and II for estimating the
chaos coefficients for all the physical modes are 204 s and 34 s, respectively, using
Matlab [25] on a computer with a 1.13 GHz PIII processor. The
CPU time quoted for method II (34 s) is the total time for the 100 initializing simulations
and the NR iterations; the number of realizations used for the simulation-based chaos expansion is
150 × 10³. For method II, the stopping criterion of the NR iterations is dictated by the proximity of


Figure 4. Convergence of NR iterations, method II. (The plot shows ||F|| on a log10 scale against iteration number for modes 7, 8 and 9.)

Table II. Statistical moments of the eigenvalues computed by the two methods.

             Mean λ̄                   Standard deviation σ_λ
Mode     Method I      Method II      Method I      Method II
1          140.93         140.93          9.63           9.65
2          202.18         202.18         16.75          16.77
3         1106.41        1106.44         75.61          75.76
4         1586.77        1587.29        127.72         131.68
5         2310.88        2310.41        158.00         158.19
6         3314.52        3314.49        274.64         274.98
7       19 265.05      19 265.05       1092.44        1091.63
8      151 247.09     151 247.14       8576.57        8570.25
9      315 826.01     315 826.11     17 909.13      17 895.94

an error quantity to zero. The error is measured as the norm of the residual of the set of equations
F(x) = 0. In this example, the iterations are stopped when this error falls below 10⁻⁹.
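A generic NR loop with this residual-norm stopping rule can be sketched as follows; the 2 × 2 toy system is illustrative only and unrelated to the chaos equations of Equation (15):

```python
import numpy as np

def newton_raphson(F, J, x0, tol=1e-9, max_iter=50):
    """Newton-Raphson iteration with the stopping rule used here:
    iterate until the residual norm ||F(x)|| falls below tol."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            return x
        # Direct linear solve (Gaussian elimination) for the Newton update.
        x = x - np.linalg.solve(J(x), r)
    raise RuntimeError("NR did not converge")

# Toy nonlinear system for illustration: x^2 + y^2 = 2, x - y = 0.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [1.0, -1.0]])
root = newton_raphson(F, J, [2.0, 0.5])   # converges to (1, 1)
```

Near the solution, the residual norm drops quadratically per iteration, which is the behaviour reported in Figure 4.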
Figure 4 shows the convergence of the iterations of method II. In this figure, iteration number
zero refers to the error prior to starting the NR iterations. The fast convergence is inherited from
the quadratic convergence of the NR method in the vicinity of the actual solution. Table II
reports the means and standard deviations of the eigenvalues computed by method I with
ten million realizations, and by method II. Both methods provide similar estimates of
the mean eigenvectors. Statistical modal interactions, represented here by E{(e_il)²}, are computed
using the chaos coefficients estimated from methods I and II. The computed
E{(e_il)²} from the two methods coincide; they are presented in Table III. Thus, both methods
yield similar estimates of the first two statistical moments of the eigenvalues and eigenvectors.


Table III. Statistical modal interaction, measured by E{(e_il)²}: both methods I and II yielded similar results.
0.98 0.02 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.02 0.98 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.02 0.98 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.98 0.02 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.98 0.02 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.02 0.98 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00

7. CONCLUSION

A novel method is presented for characterizing the random eigenvalue problem. The method relies
on describing the stochastic quantities through their polynomial chaos decompositions, and is
compared with a sampling-based method for estimating the chaos coefficients. A Newton–
Raphson (NR) procedure for solving the resulting non-linear algebraic equations is used to avoid
the problem of local minima and to take advantage of its quadratic convergence near
the actual solution. Selection of a good starting point for the NR iterations is, however, crucial.
For solving the linear system of equations arising in the NR loops, Gaussian elimination
is currently used; developing or choosing a better algorithm for this solve may improve
performance. Apart from the NR method, other global optimization techniques could be
used to solve the system of equations, and choosing an appropriate optimization strategy would
allow this approach to exploit available large-scale optimization tools.
The polynomial chaos characterization of the eigensolution presents a number of advantages,
including the ability to analytically investigate the stochastic modal interaction. Moreover, since
both eigenvalues and eigenvectors are described in terms of the same set of basic random variables
{ξ_n}, the joint probabilistic characterization of these quantities is implicit in their polynomial chaos
description.

APPENDIX A

Proof that eigenvalues and eigenvectors are square-integrable.


Eigenvalues: Our assumption is that all the elements K_ij of the real symmetric (n × n) matrix
K(θ) are finite-order polynomials in the set of standard normal random variables {ξ_i} (some of
them may be constant). Now


\[
\sum_{i=1}^{n}\sum_{j=1}^{n} K_{ij}^2 = \operatorname{tr}(\mathbf{K}^2)
= \sum_{i=1}^{n} \lambda_i(\mathbf{K}^2)
= \sum_{i=1}^{n} \lambda_i^2(\mathbf{K})
\tag{A1}
\]

Since all K_ij are square-integrable, the integral of Σ_{i=1}^n Σ_{j=1}^n K_ij² is finite; accordingly, the
integral of Σ_{i=1}^n λ_i²(K) is finite. Thus, λ_i(K) ∈ L₂(Ω) for every index i.
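The chain of identities in (A1) is easy to check numerically for any real symmetric matrix; a small sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Numerical check of (A1) for an arbitrary real symmetric matrix:
# sum_ij K_ij^2 = tr(K^2) = sum_i lambda_i(K)^2.
A = rng.standard_normal((5, 5))
K = 0.5 * (A + A.T)                      # symmetrize

frob_sq = np.sum(K**2)                   # sum of squared entries
trace_sq = np.trace(K @ K)               # tr(K^2)
eig_sq = np.sum(np.linalg.eigvalsh(K)**2)

assert np.isclose(frob_sq, trace_sq) and np.isclose(trace_sq, eig_sq)
```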


Eigenvectors: We assume

\[
\phi_i(\theta)^{\mathrm{T}}\,\phi_i(\theta) = 1
\tag{A2}
\]

If this equality holds a.e., then of course the φ_i are in L₂.

APPENDIX B

Derivatives of Equation (15) are computed in this section.


Let us denote

\[
F_k = \sum_{i=0}^{P-1}\sum_{j=0}^{P-1} E\{\psi_i \psi_j \psi_k\}\,
\phi^{(i)\mathrm{T}} \phi^{(j)} - \delta_{k0}
\tag{B1}
\]

where δ_{ij} denotes the Kronecker delta. Since this expression does not contain any λ^{(j)} term,

\[
\frac{\partial F_k}{\partial \lambda^{(j)}} = 0
\tag{B2}
\]

which is Equation (39).
Let us denote c_{ijk} = E{ψ_i ψ_j ψ_k}. Expanding F_k, we get

\[
F_k = \sum_{i=0}^{P-1}\sum_{j=0}^{P-1} c_{ijk}
\left[\phi^{(i1)}\phi^{(j1)} + \phi^{(i2)}\phi^{(j2)} + \cdots + \phi^{(il)}\phi^{(jl)}
+ \cdots + \phi^{(in)}\phi^{(jn)}\right] - \delta_{k0}
\tag{B3}
\]

where φ^{(jl)} denotes the lth element of the vector φ^{(j)}. Pulling out only the terms involving φ^{(·l)},
that is, only the lth elements of the vectors, and denoting the new series G_k,

\[
G_k = \sum_{i=0}^{P-1}\sum_{j=0}^{P-1} c_{ijk}\, \phi^{(il)} \phi^{(jl)}
\tag{B4}
\]

Note that

\[
\frac{\partial F_k}{\partial \phi^{(jl)}} = \frac{\partial G_k}{\partial \phi^{(jl)}}
\tag{B5}
\]
Expanding the series G_k,

\[
\begin{aligned}
G_k &= \sum_{i=0}^{P-1}\left[c_{i0k}\phi^{(il)}\phi^{(0l)} + c_{i1k}\phi^{(il)}\phi^{(1l)}
+ \cdots + c_{i(P-1)k}\phi^{(il)}\phi^{((P-1)l)}\right] \\
&= c_{00k}\phi^{(0l)}\phi^{(0l)} + c_{01k}\phi^{(0l)}\phi^{(1l)} + \cdots
+ c_{0(P-1)k}\phi^{(0l)}\phi^{((P-1)l)} \\
&\quad + \cdots + c_{(P-1)0k}\phi^{((P-1)l)}\phi^{(0l)}
+ c_{(P-1)1k}\phi^{((P-1)l)}\phi^{(1l)} \\
&\quad + \cdots + c_{(P-1)(P-1)k}\phi^{((P-1)l)}\phi^{((P-1)l)}
\end{aligned}
\]


The partial derivative of G_k with respect to any element, for example φ^{(0l)}, is

\[
\frac{\partial G_k}{\partial \phi^{(0l)}}
= 2c_{00k}\phi^{(0l)} + c_{01k}\phi^{(1l)} + \cdots + c_{0(P-1)k}\phi^{((P-1)l)}
+ c_{10k}\phi^{(1l)} + \cdots + c_{(P-1)0k}\phi^{((P-1)l)}
\tag{B6}
\]

Using the symmetry of the triple product E{ψ_i ψ_j ψ_k}, that is, the property c_{ijk} = c_{jik}, the above
expression becomes

\[
\frac{\partial G_k}{\partial \phi^{(0l)}} = 2\sum_{i=0}^{P-1} c_{i0k}\, \phi^{(il)}
\tag{B7}
\]

Thus, in general,

\[
\frac{\partial F_k}{\partial \phi^{(jl)}} = \frac{\partial G_k}{\partial \phi^{(jl)}}
= 2\sum_{i=0}^{P-1} \phi^{(il)}\, E\{\psi_i \psi_j \psi_k\}
\tag{B8}
\]

which is Equation (40).
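The gradient formula (B8) can be verified against a finite-difference check of (B1). The tensor below is a random stand-in for E{ψ_i ψ_j ψ_k}, symmetrized in its first two indices as the derivation requires, not the actual Hermite triple products:

```python
import numpy as np

rng = np.random.default_rng(3)
P, n = 4, 3

# Stand-in triple-product tensor with the symmetry c_ijk = c_jik.
c = rng.standard_normal((P, P, P))
c = 0.5 * (c + c.transpose(1, 0, 2))

phi = rng.standard_normal((P, n))   # phi[i] plays the role of the vector phi^(i)

def F(phi, k):
    # Equation (B1): F_k = sum_ij c_ijk phi^(i).phi^(j) - delta_k0
    return np.einsum('ij,im,jm->', c[:, :, k], phi, phi) - (1.0 if k == 0 else 0.0)

def dF(phi, k):
    # Equation (B8): dF_k/dphi^(jl) = 2 sum_i phi^(il) c_ijk
    return 2.0 * np.einsum('il,ij->jl', phi, c[:, :, k])

# Finite-difference check of (B8) in one entry.
k, j, l, h = 1, 2, 0, 1e-6
phi_p = phi.copy()
phi_p[j, l] += h
fd = (F(phi_p, k) - F(phi, k)) / h
assert np.isclose(dF(phi, k)[j, l], fd, atol=1e-4)
```

In the NR iterations of method II, this analytical Jacobian block is what makes the linear solve in each step cheap relative to a numerically differentiated one.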

ACKNOWLEDGEMENTS
The financial support of AFOSR and ONR is gratefully acknowledged.

REFERENCES
1. Mehta ML. Random Matrices (3rd edn). Elsevier: Amsterdam, 2004.
2. Soize C. Random matrix theory for modeling uncertainties in computational mechanics. Computer Methods in
Applied Mechanics and Engineering 2005; 194(12–16):1333–1366.
3. Shinozuka M, Astill CJ. Random eigenvalue problems in structural analysis. AIAA Journal 1972; 10(4):456–462.
4. Collins JD, Thomson WT. The eigenvalue problem for structural systems with statistical properties. AIAA Journal
1969; 7(4):642–648.
5. Hart GC, Collins JD. The treatment of randomness in finite element modeling. SAE Shock and Vibrations
Symposium, Los Angeles, CA, October 1970; 2509–2519.
6. Ghosh D, Ghanem R. Random eigenvalue analysis of an airframe. Proceedings of the 45th AIAA/ASME/
ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Palm Springs, CA, 19–23 April
2004.
7. Ghosh D, Ghanem R. A new algorithm for solving the random eigenvalue problem using polynomial chaos
expansion. Proceedings of the 46th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials
Conference, Austin, TX, 18–21 April 2005.
8. Ghanem R, Spanos PD. Stochastic Finite Elements: A Spectral Approach (Revised edn). Dover: New York, 2003.
9. Red-Horse J, Ghanem R. Polynomial chaos representation of the random eigenvalue problem. 40th Structures,
Structural Dynamics, and Materials Conference, St. Louis, MO, 12–15 April 1999.
10. Ghanem R, Doostan A. On the construction and analysis of stochastic predictive models: characterization and
propagation of the errors associated with limited data. Journal of Computational Physics 2006; 217(1):63–81.
11. Das S, Ghanem R, Spall J. Asymptotic Sampling Distribution for Polynomial Chaos Representation of Data:
A Maximum Entropy and Fisher Information Approach. 45th IEEE Conference on Decision and Control,
San Diego, CA, 2006.
12. Wiener N. Homogeneous chaos. American Journal of Mathematics 1938; 60(4):897–936.


13. Cameron RH, Martin WT. The orthogonal development of non-linear functionals in series of Fourier–Hermite
functionals. Annals of Mathematics 1947; 48(2):385–392.
14. Xiu D, Karniadakis G. Modeling uncertainty in flow simulations via generalized polynomial chaos. Journal of
Computational Physics 2003; 187(1):137–167.
15. Le Maitre OP, Najm H, Ghanem R, Knio O. Multi-resolution analysis of Wiener-type uncertainty propagation
schemes. Journal of Computational Physics 2004; 197(2):502–531.
16. Ghanem R, Ghosh D. Eigenvalue analysis of a random frame. EURODYN2002, Munich, 2–5 September 2002;
341–346.
17. Ghosh D, Ghanem R, Petit C. Stochastic buckling of a joined wing. Stochastic Dynamics Conference, Hangzhou,
China, 26–28 May 2003.
18. Dessombz O, Diniz A, Thouverez F, Jézéquel L. Analysis of stochastic structures: perturbation method and
projection on homogeneous chaos. Proceedings of the IMAC XVII, Kissimmee, FL, 1999.
19. Naylor AW, Sell GR. Linear Operator Theory in Engineering and Science. Springer: Berlin, 2000.
20. Billingsley P. Probability and Measure (3rd edn). Wiley-Interscience: New York, 1995.
21. Ghanem R, Ghosh D. Modal interaction of random dynamical systems. Twenty Second International Modal
Analysis Conference (IMAC-XXII), SEM, Inc., Dearborn, MI, January 2004.
22. Ghosh D, Ghanem R, Red-Horse J. Analysis of eigenvalues and modal interaction of stochastic systems. AIAA
Journal 2005; 43(10):2196–2201.
23. Schueller GI, Pradlwarter HJ, Vasta M, Harnpornchai N. A benchmark study of nonlinear stochastic dynamical
systems. Proceedings of the 7th International Conference on Structural Safety and Reliability, Kyoto, 1997;
355–362.
24. Desceliers C, Ghanem R, Soize C. Polynomial chaos representation of a stochastic preconditioner. International
Journal for Numerical Methods in Engineering 2005; 64(5):618–634.
25. http://www.mathworks.com/products/matlab/
