EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, VOL. 5, 283-292 (1977)
IMPROVED NUMERICAL DISSIPATION FOR TIME INTEGRATION
ALGORITHMS IN STRUCTURAL DYNAMICS
HANS M. HILBER, THOMAS J. R. HUGHES AND ROBERT L. TAYLOR
Division of Structural Engineering and Structural Mechanics, Department of Civil Engineering
and Lawrence Berkeley Laboratory, University of California, Berkeley, California, U.S.A.

Received 22 April 1976
Revised 14 June 1976
© 1977 by John Wiley & Sons, Ltd.
SUMMARY
A new family of unconditionally stable one-step methods for the direct integration of the equations of structural
dynamics is introduced and is shown to possess improved algorithmic damping properties which can be
continuously controlled. The new methods are compared with members of the Newmark family, and the
Houbolt and Wilson methods.
INTRODUCTION
In many structural dynamics applications only low mode response is of interest. For these cases the use of
implicit unconditionally stable algorithms is generally preferred over conditionally stable algorithms.
Conditionally stable algorithms require that the size of the time step employed be inversely proportional
to the highest frequency of the discrete system. In practice this is a severe limitation as accuracy in the lower
modes can be attained with time steps which are very large compared with the period of the highest mode.
For unconditionally stable algorithms a time step may be selected independent of stability considerations
and thus can result in a substantial saving of computational effort.
In addition to being unconditionally stable, when only low mode response is of interest it is often
advantageous for an algorithm to possess some form of numerical dissipation to damp out any spurious
participation of the higher modes. Examples of algorithms commonly used in structural dynamics which
possess these properties are Houbolt's method,1 the Wilson θ-method2 and the Newmark family of methods
restricted to parameter values of γ > ½ and β ≥ (γ + ½)²/4; see Reference 3. Another unconditionally stable
method of some interest has been proposed by Park.4 This method has characteristics similar to the Wilson
and Houbolt methods in linear problems, but we have not yet had an adequate opportunity to study it in
detail and thus have not included it in our comparisons.
The Newmark family of methods allows the amount of dissipation to be continuously controlled by a
parameter other than the time step. For example, set β = (γ + ½)²/4 and γ > ½; then the amount of dissipation,
for a fixed time step, is increased by increasing γ. On the other hand, the dissipative properties of this family
of algorithms are considered to be inferior to both the Houbolt and the Wilson methods, since the lower
modes are affected too strongly. (It seems all of these algorithms adequately damp the highest modes;
see Bathe and Wilson.2)
In the Wilson method, θ must be selected greater than or equal to 1.37 to maintain unconditional stability.
It is recommended in Reference 2 that θ = 1.4 be employed, as further increasing θ reduces accuracy and
further increases dissipation; but even for θ = 1.4 the method is highly dissipative. For example, the commonly
used rule-of-thumb for non-dissipative algorithms requires that at least ten time steps per period be taken for
accuracy. Employing this rule-of-thumb, we conclude from results presented in Reference 2 that the trapezoidal
rule (β = ¼ and γ = ½ in the Newmark family), which is non-dissipative, would result in a period elongation
of approximately 3 per cent and no amplitude decay, which is acceptable in many engineering situations. On the other
hand, Wilson's method, with θ = 1.4, results in approximately 5 per cent period elongation and approximately 7 per cent amplitude
decay. From this we conclude that the Wilson method is generally too dissipative in the lower modes,
requiring a time step to be taken that is smaller than that needed for accuracy.
Houbolt's method is even more highly dissipative than Wilson's method and does not permit parametric
control over the amount of dissipation present. Thus, despite its shortcoming, the Wilson method is considered
by many to be the best available unconditionally stable one-step algorithm when numerical dissipation is
desired.
Since it seemed that the commonly used unconditionally stable, dissipative algorithms of structural
dynamics all possessed some drawbacks, a research effort was undertaken to see if an improved one-step
method could be constructed. The requirements of the desired algorithm were delineated as follows:
1. It should be unconditionally stable when applied to linear problems.
2. It should possess numerical dissipation which can be controlled by a parameter other than the time step; in particular, it should be possible to achieve zero numerical dissipation.
3. The numerical dissipation should not affect the lower modes too strongly.
We have been able to develop an algorithm which achieves the above requirements and this paper is
devoted to a description of its properties.
In the section 'Analysis' we define and analyse a three-parameter family of algorithms which contains the
Newmark family. A new form of dissipation, called α-dissipation, is introduced by way of these algorithms.
The new one-parameter family of methods which is advocated here is a subclass contained in the three-
parameter family.
In the section 'Comparison of dissipative algorithms' the unfavourable algorithmic dissipation possessed
by the Newmark family is demonstrated. Furthermore, we show that α-dissipation is similar to linear
viscous damping and, in itself, is ineffective in the higher modes. The dissipation of our new algorithms,
which consists of a combination of positive Newmark γ-dissipation and negative α-dissipation, is shown to
have improved characteristics. Results of a stability analysis of the new family are presented and its
algorithmic damping ratio and relative period error are shown to compare favourably with those of the
Wilson and Houbolt methods.
The present developments are summarized in the section 'Conclusions'.
ANALYSIS
Consider the linear undamped matrix equations of structural dynamics
M ü + K u = F     (1)

where M is the mass matrix, K is the stiffness matrix, F is the vector of external forces (a given function of
time), u is the displacement vector and superposed dots signify time differentiation (e.g. ü = d²u/dt² is the
acceleration vector). The initial value problem for (1) consists of finding a function u = u(t), where
t ∈ [0, T], T > 0, satisfying (1) and the initial conditions

u(0) = d,    u̇(0) = v     (2)
We are interested in obtaining approximate solutions of (1) by one-step difference methods. To this end
consider the family of algorithms defined by the following relations:
M a_{n+1} + (1 + α) K d_{n+1} − α K d_n = F_{n+1},   n ∈ {0, 1, ..., N−1}     (3a)
d_{n+1} = d_n + Δt v_n + Δt²[(½ − β) a_n + β a_{n+1}]     (3b)
v_{n+1} = v_n + Δt[(1 − γ) a_n + γ a_{n+1}]     (3c)
d₀ = d
v₀ = v
a₀ = M⁻¹(F₀ − K d₀)
where N is the number of time steps, Δt = T/N, and d_n, v_n and a_n are the approximations to u(t_n), u̇(t_n) and ü(t_n),
respectively, in which t_n = nΔt and F_n = F(t_n); α, β and γ are free parameters which govern the stability
and numerical dissipation of the algorithm. If α = 0 this family of algorithms reduces to the Newmark
family. In this case if γ = ½ the algorithms possess no numerical dissipation (in a sense made precise later on),
whereas if γ > ½ numerical dissipation is present; if β ≥ (γ + ½)²/4 the algorithm in question is unconditionally
stable. Elaboration on these points and further properties of the Newmark family of algorithms may be
found in Reference 3.
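As a concrete illustration of how one step of (3a)-(3c) can be carried out, the following sketch (not from the paper; the function and variable names are ours, and NumPy is assumed) substitutes the displacement update (3b) into (3a), solves the resulting linear system for a_{n+1}, and then recovers d_{n+1} and v_{n+1}:

```python
import numpy as np

def hht_alpha_step(M, K, F_next, d, v, a, dt, alpha, beta, gamma):
    """One step of the alpha-method (3a)-(3c): a sketch, not the authors' code.

    Solves M a_{n+1} + (1+alpha) K d_{n+1} - alpha K d_n = F_{n+1}
    together with the updates (3b) and (3c).
    """
    # Part of d_{n+1} not involving a_{n+1} (from (3b)).
    d_pred = d + dt * v + dt**2 * (0.5 - beta) * a
    # Effective system for a_{n+1} after substituting (3b) into (3a).
    LHS = M + (1.0 + alpha) * beta * dt**2 * K
    RHS = F_next + alpha * (K @ d) - (1.0 + alpha) * (K @ d_pred)
    a_next = np.linalg.solve(LHS, RHS)
    d_next = d_pred + beta * dt**2 * a_next      # complete (3b)
    v_next = v + dt * ((1.0 - gamma) * a + gamma * a_next)   # (3c)
    return d_next, v_next, a_next
```

For the one-parameter subfamily advocated later in the paper one would take β = (1 − α)²/4 and γ = ½ − α; with α = 0, β = ¼ and γ = ½ the step reduces to the trapezoidal rule.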
To analyse systems such as (1), or equivalently (3), it is convenient to invoke the orthogonality property of the
eigenvectors and reduce to a single degree of freedom. Employing the obvious notations, the single-degree-of-freedom
analogues of (1) and (3a)-(3c) are

M ü + K u = F     (4)

and

M a_{n+1} + (1 + α) K d_{n+1} − α K d_n = F_{n+1},   n ∈ {0, 1, ..., N−1}     (5a)
d_{n+1} = d_n + Δt v_n + Δt²[(½ − β) a_n + β a_{n+1}]     (5b)
v_{n+1} = v_n + Δt[(1 − γ) a_n + γ a_{n+1}]     (5c)
d₀ = d
v₀ = v
a₀ = M⁻¹(F₀ − K d₀)
Dissipative and dispersive characteristics of the above algorithm can be evaluated in terms of the solution
it generates to simple pilot problems in which F = 0. In these cases (5a)-(5c) can be succinctly written in the
recursive form

x_{n+1} = A x_n,   n ∈ {0, 1, ..., N−1}     (6a)

where

x_n = (d_n, Δt v_n, Δt² a_n)^T     (6b)

and A is called the amplification matrix. Stability and accuracy of an algorithm depend upon the eigenvalues
of the amplification matrix. The characteristic equation for A is

det(A − λI) = λ³ − 2A₁λ² + A₂λ − A₃ = 0     (7)

where I is the identity matrix, λ denotes an eigenvalue and

A₁ = ½ trace A,   A₂ = sum of principal minors of A,   A₃ = det A     (8)

are invariants of A.
The spectral radius is ρ = max{|λ₁|, |λ₂|, |λ₃|}, where λ₁, λ₂ and λ₃ are the eigenvalues of A.
The velocities and accelerations may be eliminated by repeated use of (6a) to obtain a difference equation
in terms of the displacements:

d_{n+1} − 2A₁ d_n + A₂ d_{n−1} − A₃ d_{n−2} = 0,   n ∈ {2, 3, ..., N−1}     (9)

Comparison of (9) with (7) indicates that the discrete solution has the representation

d_n = c₁λ₁ⁿ + c₂λ₂ⁿ + c₃λ₃ⁿ     (10)

where the cᵢ's are determined from the initial data.
The explicit definition of A for the family of algorithms defined by (5) is

A = (1/D) ⎡  1 + αβΩ²      1                      ½ − β                      ⎤
          ⎢  −γΩ²          1 − (1+α)(γ−β)Ω²       1 − γ − (1+α)(γ/2 − β)Ω²   ⎥     (11)
          ⎣  −Ω²           −(1+α)Ω²               −(1+α)(½ − β)Ω²            ⎦

in which

D = 1 + (1+α)βΩ²,   Ω = ωΔt,   ω = √(K/M)

Explicit forms corresponding to (8) and (9), respectively, can be computed from (11):

A₁ = 1 − Ω²[(1+α)(γ + ½) − αβ]/(2D)
A₂ = 1 − Ω²[γ − ½ + 2α(γ − β)]/D     (12)
A₃ = αΩ²(β − γ + ½)/D

and

(d_{n+1} − 2d_n + d_{n−1})/Δt² + [ωΩ(α + γ − ½)/D](d_n − d_{n−1})/Δt + (ω²/D) d_n − A₃(d_n − 2d_{n−1} + d_{n−2})/Δt² = 0     (13)
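The matrix (11) and the invariants (12) can be cross-checked numerically. The sketch below (ours, not the authors') assembles A column by column by applying one homogeneous step of (5) to the unit states x_n = (d_n, Δt v_n, Δt² a_n)^T, then compares the coefficients of its characteristic polynomial with (7) and (12); the parameter values are illustrative assumptions.

```python
import numpy as np

def amplification_matrix(Omega, alpha, beta, gamma):
    """Assemble A of (11) for the scalar pilot problem (F = 0), column by column.

    With M = 1 and dt = 1 we have K = Omega**2 (Omega = omega*dt, omega**2 = K/M),
    and the state x_n = (d_n, dt*v_n, dt**2*a_n)^T is simply (d_n, v_n, a_n).
    """
    K, dt = Omega**2, 1.0
    A = np.zeros((3, 3))
    for j in range(3):
        d, v, a = np.eye(3)[j]
        d_pred = d + dt*v + dt**2*(0.5 - beta)*a
        a1 = (alpha*K*d - (1 + alpha)*K*d_pred) / (1.0 + (1 + alpha)*beta*dt**2*K)
        d1 = d_pred + beta*dt**2*a1
        v1 = v + dt*((1 - gamma)*a + gamma*a1)
        A[:, j] = (d1, dt*v1, dt**2*a1)
    return A

# Illustrative check of (12) against (7): coefficients of det(A - lambda*I).
alpha, beta, gamma, Omega = -0.1, 0.3025, 0.6, 1.5   # assumed sample values
A = amplification_matrix(Omega, alpha, beta, gamma)
D = 1 + (1 + alpha)*beta*Omega**2
A1 = 1 - Omega**2*((1 + alpha)*(gamma + 0.5) - alpha*beta)/(2*D)
A2 = 1 - Omega**2*(gamma - 0.5 + 2*alpha*(gamma - beta))/D
A3 = alpha*Omega**2*(beta - gamma + 0.5)/D
print(np.allclose(np.poly(A), [1.0, -2*A1, A2, -A3]))   # expect True
```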
Example
Consider the case in which β = 0 and γ = ½. For these values (12) and (13) become, respectively,

A₁ = 1 − (1+α)Ω²/2,   A₂ = 1 − αΩ²,   A₃ = 0     (14)

and

(d_{n+1} − 2d_n + d_{n−1})/Δt² + ωΩα(d_n − d_{n−1})/Δt + ω² d_n = 0     (15)

Since A₃ = 0 there are only two non-trivial eigenvalues of A. Thus the solution of (15) has the form

d_n = c₁λ₁ⁿ + c₂λ₂ⁿ     (16a)

where

λ_{1,2} = A₁ ± √(A₁² − A₂)     (16b)

If A₁² < A₂ the eigenvalues are complex conjugates and (16a) can be written

d_n = ρⁿ(d₀ cos ω̄t_n + c sin ω̄t_n)     (17)

where

ρ = √A₂,   ω̄ = Ω̄/Δt,   Ω̄ = arctan √(A₂/A₁² − 1),   c = (d₁ − A₁d₀)/√(A₂ − A₁²)     (18)
It is clear from (17) and (18) that the requirement for stable oscillatory response is A₁² < A₂ < 1 or, equivalently,
Ω < 2/(1 + α) and 0 < α. With α = 0 this algorithm becomes the familiar central difference method, which is
non-dissipative, i.e. ρ = 1. For positive values of α the algorithm is dissipative; the algorithm with α = ½
has been used successfully in the finite difference work of Aboudi5 on elastic shock-wave propagation.
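A quick numerical check of this example (with an assumed illustrative value α = 0.25, which is not taken from the paper): the oscillatory-stability condition A₁² < A₂ < 1 built from the invariants (14) should coincide with Ω < 2/(1 + α) whenever α > 0.

```python
alpha = 0.25                                     # assumed value; any alpha > 0 illustrates the point
for Omega in (1.0, 1.5, 2.0):
    A1 = 1 - (1 + alpha)*Omega**2/2              # invariants (14) with beta = 0, gamma = 1/2
    A2 = 1 - alpha*Omega**2
    oscillatory = A1**2 < A2 < 1.0               # condition derived from (17) and (18)
    print(Omega, oscillatory, Omega < 2/(1 + alpha))   # the two flags should agree
```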
In general, A₃ ≠ 0 for the family of algorithms defined by (5) and therefore the amplification matrix has
three non-zero eigenvalues. In this case we define stability in terms of spectral stability, which requires that
ρ ≤ 1 and that double roots (eigenvalues of multiplicity two) satisfy |λ| < 1; see Gear6 for further details. If the
algorithm in question is spectrally stable for all Ω > 0, it is said to be unconditionally stable.
It is a standard exercise to show that the algorithms defined by (5) are convergent, i.e. for t_n fixed and
n = t_n/Δt, d_n → u(t_n) as Δt → 0. A consequence of convergence is that there exists an Ω_c > 0 such that if
0 < Ω < Ω_c then (7) has two complex conjugate roots λ_{1,2}, the principal roots, and a so-called spurious root λ₃,
which satisfy |λ₃| < |λ_{1,2}| ≤ 1. Under these circumstances the principal roots of (7) are
λ_{1,2} = A ± iB = exp[Ω̄(−ζ̄ ± i)]     (19)

and the solution of (9) may be written in the form

d_n = exp(−ζ̄ω̄t_n)(c₁ cos ω̄t_n + c₂ sin ω̄t_n) + c₃λ₃ⁿ     (20a)

where

ω̄ = Ω̄/Δt,   ζ̄ = −ln(A² + B²)/(2Ω̄),   Ω̄ = arctan(B/A)     (20b)

and the cᵢ's are determined from the initial data.
As measures of the numerical dissipation and dispersion we consider the algorithmic damping ratio ζ̄ and the
relative period error (T̄ − T)/T, respectively, where T = 2π/ω and T̄ = 2π/ω̄. Note that from (20b) both ζ̄ and
T̄ are defined in terms of the principal roots. Thus these measures of accuracy are defined only for values
of Ω such that 0 < Ω < Ω_c. Outside this region accuracy is not an issue and we are concerned only with
stability; here ρ is the most important quantity since it provides information about stability and,
concomitantly, about dissipation.
For completeness we note that the logarithmic decrement δ = ln[d(t_n)/d(t_n + T̄)] and the amplitude decay
function AD = 1 − d(t_n + T̄)/d(t_n) are also commonly used measures of algorithmic dissipation. Either of
these measures determines the other, since AD = 1 − exp(−δ). As is clear from their definitions, AD and δ can
only be determined from the discrete solution of an initial-value problem; see Reference 2. This entails post-
processing involving approximate interpolation to ascertain consecutive peak values. Since ζ̄ is defined in
terms of the principal roots, it seems to be the preferable measure of dissipation. For small time steps all
three measures are equivalent for practical purposes. This can be seen as follows. First of all, as Δt/T → 0,
δ → 0; therefore for sufficiently small Δt/T, the definition of AD implies that AD ≈ δ. Furthermore, for
convergent algorithms the effects of the spurious root vanish in the limit Δt/T → 0. Thus neglecting λ₃ in
(20a) yields δ ≈ 2πζ̄.
The period of the discrete solution, T̄, can also be determined analytically from (20b), rather than by solving
initial-value problems and approximately ascertaining consecutive peak values.
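For reference, the sketch below (our own construction, under the assumption 0 < Ω < Ω_c so that a complex-conjugate pair of principal roots exists) evaluates ζ̄ and (T̄ − T)/T directly from the roots of (7), using the closed-form invariants (12) and the definitions in (20b).

```python
import numpy as np

def damping_and_period_error(dt_over_T, alpha, beta, gamma):
    """Algorithmic damping ratio zeta_bar and relative period error (T_bar - T)/T.

    A sketch: invariants from (12), principal roots from (7), measures from (20b).
    Assumes 0 < Omega < Omega_c so that a complex-conjugate pair exists.
    """
    Omega = 2*np.pi*dt_over_T                      # Omega = omega*dt and T = 2*pi/omega
    D = 1 + (1 + alpha)*beta*Omega**2
    A1 = 1 - Omega**2*((1 + alpha)*(gamma + 0.5) - alpha*beta)/(2*D)
    A2 = 1 - Omega**2*(gamma - 0.5 + 2*alpha*(gamma - beta))/D
    A3 = alpha*Omega**2*(beta - gamma + 0.5)/D
    roots = np.roots([1.0, -2*A1, A2, -A3])        # characteristic equation (7)
    lam = roots[np.abs(roots.imag) > 1e-12]        # principal (complex) roots
    lam = lam[np.argmax(lam.imag)]                 # the root A + iB with B > 0
    Omega_bar = np.arctan2(lam.imag, lam.real)     # Omega_bar = arctan(B/A)
    zeta_bar = -np.log(abs(lam))/Omega_bar         # = -ln(A**2 + B**2)/(2*Omega_bar)
    period_error = Omega/Omega_bar - 1.0           # (T_bar - T)/T, T_bar = 2*pi/omega_bar
    return zeta_bar, period_error

# e.g. the trapezoidal rule (alpha = 0, beta = 1/4, gamma = 1/2) gives zeta_bar ~ 0
# and roughly 3 per cent period elongation at ten steps per period.
print(damping_and_period_error(0.1, 0.0, 0.25, 0.5))
```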
In the sequel we shall show that the dissipation incurred by positive values of α is not too effective. Its
qualitative behaviour is the same as that of linear viscous damping; see Hilber.7 However, by appropriately
combining negative α-dissipation with particular values of β and γ, a one-parameter family of algorithms with
the attributes enumerated in the Introduction can be constructed. Specifically, we take β = (1 − α)²/4 and
γ = ½ − α. Then the invariants of the amplification matrix become

A₁ = 1 − Ω²/(2D) + A₃/2
A₂ = 1 + 2A₃     (21)
A₃ = α(1 + α)²Ω²/(4D)

where D = 1 + (1 − α)(1 − α²)Ω²/4. Substituting (21) into (7) yields

(λ − A₃)(λ − 1)² + Ω²λ²/D = 0     (22)

In the limit Ω → 0, λ_{1,2} → 1 and λ₃ → 0. (In the Wilson and Houbolt algorithms λ₃ does not vanish in this limit.
The significance of this fact does not seem to be well understood.) On the other hand, in the limit Ω → ∞, for
fixed α ≠ −1, (22) becomes

[(1 − α)(1 − α²)λ − α(1 + α)²](λ − 1)² + 4λ² = 0     (23)

The roots of (23) are real and are depicted in Figure 1 as functions of α. This figure indicates that the
proposed algorithm is stable in the limit Δt/T → ∞ whenever −½ ≤ α ≤ 0. It is clear from Figure 1 that decreasing
α below −⅓ increases the spectral radius. Moreover, it was found by numerical experimentation that for
small Δt/T, ζ̄ cannot be increased by reducing α below −⅓. Thus we conclude that the range of practical
interest is −⅓ ≤ α ≤ 0.
[Figure 1. Eigenvalues of the amplification matrix in the limit Δt/T → ∞ vs α.]
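The information in Figure 1 can be reproduced numerically: expanding (23) into a cubic in λ and taking the largest root magnitude gives the limiting spectral radius for each α. The snippet below is our own check, not the authors' computation.

```python
import numpy as np

# Roots of (23): [(1 - a)(1 - a^2) lam - a(1 + a)^2](lam - 1)^2 + 4 lam^2 = 0,
# the Omega -> infinity limit for beta = (1 - alpha)^2/4, gamma = 1/2 - alpha.
for a in (0.0, -0.1, -1.0/3.0, -0.5):
    p = (1 - a)*(1 - a**2)                  # coefficient multiplying lam in the bracket
    q = a*(1 + a)**2
    coeffs = [p, 4 - 2*p - q, p + 2*q, -q]  # cubic obtained by expanding (23)
    lam = np.roots(coeffs)
    print(f"alpha = {a:+.3f}   max |lambda| = {np.max(np.abs(lam)):.3f}")
# alpha = -1/3 gives a triple real root at -1/2; alpha = 0 and -1/2 give max |lambda| = 1.
```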
COMPARISON OF DISSIPATIVE ALGORITHMS
Spectral radius is an important measure of stability and dissipation. Figure 2 illustrates the behaviour of
spectral radii vs Δt/T for the following algorithms (in all cases β = (γ + ½)²/4, which ensures unconditional stability):
(a) Trapezoidal rule (α = 0, β = 0.25, γ = 0.5).
(b) Trapezoidal rule with α-damping (α = 0.1, β = 0.25, γ = 0.5).
(c) Newmark method with γ-damping (α = 0, β = 0.3025, γ = 0.6).
(d) A member of the new family proposed here (α = −0.1, β = 0.3025, γ = 0.6).
The spectral radii for cases (c) and (d) are strictly less than one as Δt/T → ∞. This condition ensures that the
response of the higher modes is damped out. The results for case (b) indicate why α-damping, in itself, is not an
effective dissipative mechanism. For large Δt/T, cases (c) and (d) are identical. However, for small Δt/T,
the spectral radius for case (d) remains closer to one over a larger range of Δt/T. This is due to the addition
of negative α-dissipation. In fact, it was the observation that combining cases (b) and (c) would produce an
improved spectral radius graph which led to the present scheme.
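These observations about cases (a)-(d) can be verified by sweeping Δt/T and computing the spectral radius from the invariants (12); the following sketch (ours, with the sweep values chosen arbitrarily) prints ρ at a small, a moderate and a very large Δt/T for each case.

```python
import numpy as np

def spectral_radius(dt_over_T, alpha, beta, gamma):
    """Spectral radius of A, computed from the invariants (12) via equation (7)."""
    Omega = 2*np.pi*dt_over_T
    D = 1 + (1 + alpha)*beta*Omega**2
    A1 = 1 - Omega**2*((1 + alpha)*(gamma + 0.5) - alpha*beta)/(2*D)
    A2 = 1 - Omega**2*(gamma - 0.5 + 2*alpha*(gamma - beta))/D
    A3 = alpha*Omega**2*(beta - gamma + 0.5)/D
    return np.max(np.abs(np.roots([1.0, -2*A1, A2, -A3])))

cases = {                                    # (alpha, beta, gamma) for cases (a)-(d)
    "(a) trapezoidal rule":        (0.0, 0.25, 0.5),
    "(b) trapezoidal + alpha":     (0.1, 0.25, 0.5),
    "(c) Newmark, gamma-damping":  (0.0, 0.3025, 0.6),
    "(d) new family":              (-0.1, 0.3025, 0.6),
}
for name, (a, b, g) in cases.items():
    rho = [spectral_radius(r, a, b, g) for r in (0.1, 1.0, 100.0)]
    print(name, [round(x, 3) for x in rho])
```

For cases (c) and (d) the last value settles near 0.82, while cases (a) and (b) stay at or return toward 1, in line with the discussion above.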
For comparison purposes we have plotted in Figure 3 the spectral radii of various schemes vs Δt/T. The
strong dissipation possessed by the Houbolt and Wilson methods is clearly evident. The superiority of the
dissipative characteristics of the present scheme over those of the Wilson method can be seen from Figure 3.
Consider the case α = −0.3: for small Δt/T the new algorithm has a spectral radius curve closer to one than
does the Wilson method, indicating that it is more accurate in the lower modes, yet for large Δt/T the
dissipation is stronger.

[Figure 2. Spectral radii vs Δt/T for the new method and Newmark schemes.]

[Figure 3. Spectral radii vs Δt/T for the new methods and the Houbolt and Wilson (θ = 1.4) schemes.]
The point at which the spectral radius attains its minimum in Wilson's method (Δt/T ≈ 3) marks the
bifurcation of the complex conjugate principal roots into real roots.
[Figure 4. Damping ratios vs Δt/T for the new method and Newmark schemes: (a) trapezoidal rule (α = 0, β = 0.25, γ = 0.5); (b) trapezoidal rule with α-dissipation (α = 0.1, β = 0.25, γ = 0.5); (c) Newmark method (α = 0, β = 0.3025, γ = 0.6); (d) new algorithm (α = −0.1, β = 0.3025, γ = 0.6).]

[Figure 5. Damping ratios vs Δt/T for the new methods and the Houbolt and Wilson schemes.]

In Figure 4 the damping ratios vs Δt/T are plotted for cases (a), (b), (c) and (d). Desirable properties for an
algorithmic damping ratio graph to possess are a zero tangent at the origin and subsequently a controlled
turn upward. This ensures adequate dissipation in the higher modes and at the same time guarantees that the
lower modes are not affected too strongly. Notice that for case (c) the damping ratio curve has positive
slope at the origin. This is typical of Newmark γ-damping and is the reason why the Newmark family is felt
to possess ineffective numerical dissipation. Case (b) also possesses this property and, in addition, turns
downward as Δt/T increases, which further emphasizes the ineffectiveness of α-dissipation. On the other
hand, the damping ratio for case (d) has a zero slope at the origin and then turns upward.
In Figure 5 damping ratios for various values of α in the present scheme are compared with those for the
Wilson and Houbolt methods. The continuous control of numerical dissipation possible in the present scheme
is evident and the graphs show that the proposed family of algorithms possesses the desirable numerical
dissipation characteristics cited previously.
Finally, in Figure 6 the relative period error is plotted vs Δt/T for the various cases.

[Figure 6. Relative period error vs Δt/T for the new methods and the Houbolt and Wilson schemes.]
CONCLUSIONS
A new family of unconditionally stable one-step algorithms for structural dynamics has been developed
which possesses improved algorithmic damping properties that can be continuously controlled. In particular,
it is possible to achieve zero damping. It is shown that there are members of the new family which are more
accurate in the lower modes than the Wilson method, yet are more strongly dissipative in the higher modes.
The new methods involve commensurate storage when compared with the Newmark and Wilson methods,
and are no more difficult to implement.
ACKNOWLEDGEMENT
We would like to acknowledge the support for this work provided by the United States Energy Research
and Development Administration through the Lawrence Berkeley Laboratory.
REFERENCES
1. J. C. Houbolt, A recurrence matrix solution for the dynamic response of elastic aircraft, J. Aeronaut. Sci. 17, 540-550 (1950).
2. K. J. Bathe and E. L. Wilson, Stability and accuracy analysis of direct integration methods, Earthq. Engng Struct. Dyn. 1, 283-291 (1973).
3. G. L. Goudreau and R. L. Taylor, Evaluation of numerical methods in elastodynamics, Comp. Meth. Appl. Mech. Eng. 2, 69-97 (1973).
4. K. C. Park, Evaluating time integration methods for non-linear dynamic analysis, ASME-AMD, 14, 35-58.
5. J. Aboudi, Two-dimensional wave propagation in a nonlinear elastic half-space, Comp. Meth. Appl. Mech. Eng. 9, 47-64 (1976).
6. C. W. Gear, Numerical Initial Value Problems in Ordinary Differential Equations, Prentice-Hall, Englewood Cliffs, N.J., 1971.
7. H. M. Hilber, Analysis and design of numerical integration methods in structural dynamics, Ph.D. Thesis, Univ. of California, Berkeley, November 1976.
8. R. D. Krieg and S. W. Key, Transient shell response by numerical time integration, Int. J. num. Meth. Engng, 7, 273-287 (1973).
Note added in proof. Since this paper was accepted for publication, Dr. S. W. Key of Sandia Laboratories, Albuquerque,
New Mexico, kindly pointed out to the authors that the displacement difference equation corresponding to the algorithm
of (3) falls within a very general four-parameter class of three-step displacement difference equations considered in
Reference 8.
