Pre Print


Krylov methods to reduce large structural mechanical FEM

models of machine tools


Andreas Soppa
TU Braunschweig
Institut Computational Mathematics
Pockelstr. 14
38106 Braunschweig
Abstract
In model order reduction of Finite Element (FE) models the most popular method is modal reduction. In recent years new methods for reducing large, sparse dynamical systems have been presented. The two most important types are Balanced Truncation Approximation (BTA) and Krylov subspace methods. Here the reduction with Krylov subspace methods of the special systems generated by the FE process shall be analyzed. For that purpose, Krylov subspace methods for reducing state space systems and systems of second order, based on Padé-via-Lanczos and Rational Interpolation, were tested.
1 Introduction
Nowadays modern machine tools consist of flexible mechanical structures, which are embedded in a control loop. To achieve the desired features of a machine tool, simulation techniques are used.
To simulate the properties of the mechanical structure, FE methods are common. The drawback of FE methods is that they result in very large, sparse linear dynamical systems, so the computation of the simulation takes too much time and memory. To calculate the results in an acceptable time, modal reduction is usually used. This method is based on the eigenvalue decomposition, and it has the disadvantage of a huge amount of computation time and memory when the dimension of the model increases.
In recent years new methods for reducing large, sparse dynamical systems have been presented. The two most important are Balanced Truncation Approximation (BTA) and Krylov subspace methods. Here the reduction of the special systems generated by the FE process with Krylov subspace methods is analyzed.
2 The FE model
To construct and design machine tools, CAD environments like Nastran are used. A CAD environment includes a preprocessor for CAD design, a solver for computing the structural mechanical properties, and a postprocessor to redesign the model. To find an optimal parameter setting for the control loop of a designed machine tool, simulations with its FE model are run. For that, simulation environments like Simulink are used.
The preprocessor output of the CAD environment is a mass matrix $M \in \mathbb{R}^{n \times n}$ and a stiffness matrix $K \in \mathbb{R}^{n \times n}$ of the designed FE model. For the problem at hand it is common to create a damping matrix of the form
$$D = \alpha M + \beta K,$$
called Rayleigh damping, where $\alpha$ and $\beta$ are parameters which are chosen from the experience of the design engineer and lie between 0 and 0.1.
With these three matrices the FE model of second order, which describes the dynamic behavior of the designed machine tool, reads
$$M \ddot{x}(t) + D \dot{x}(t) + K x(t) = B u(t), \qquad (1)$$
$$y(t) = C_v^T \dot{x}(t) + C_p^T x(t), \qquad (2)$$
where $B \in \mathbb{R}^{n \times p}$ is called the input matrix and $C_v^T, C_p^T \in \mathbb{R}^{m \times n}$ are called the output matrices for velocity and position. The matrix $C_v^T$ contains the output vectors which describe the velocity response caused by the input impulse; the matrix $C_p^T$ contains the output vectors which describe the position response caused by the input impulse. The vector $x(t) \in \mathbb{R}^n$ is called the state vector, $u(t) \in \mathbb{R}^p$ is called the input or impulse vector and $y(t) \in \mathbb{R}^m$ is called the output vector.
To implement the designed model in the simulation environment Simulink, the second order system has to be transformed into a system of first order, called a state space model, because Simulink can only handle models in this form. By the transformation the dimension of the system doubles.
The second order model (1),(2) can be rewritten in first order form as
$$\underbrace{\begin{bmatrix} F & 0 \\ 0 & M \end{bmatrix}}_{E} \underbrace{\begin{bmatrix} \dot{x}(t) \\ \ddot{x}(t) \end{bmatrix}}_{\dot{z}(t)} = \underbrace{\begin{bmatrix} 0 & F \\ -K & -D \end{bmatrix}}_{A} \underbrace{\begin{bmatrix} x(t) \\ \dot{x}(t) \end{bmatrix}}_{z(t)} + \underbrace{\begin{bmatrix} 0 \\ B \end{bmatrix}}_{G} u(t), \qquad (3)$$
$$y(t) = \underbrace{\begin{bmatrix} C_p^T & C_v^T \end{bmatrix}}_{C^T} \underbrace{\begin{bmatrix} x(t) \\ \dot{x}(t) \end{bmatrix}}_{z(t)}, \qquad (4)$$
where $F \in \mathbb{R}^{n \times n}$ is nonsingular and arbitrary, $E, A \in \mathbb{R}^{2n \times 2n}$, $G \in \mathbb{R}^{2n \times p}$, $C^T \in \mathbb{R}^{m \times 2n}$, $z(t) \in \mathbb{R}^{2n}$, $u(t) \in \mathbb{R}^p$ and $y(t) \in \mathbb{R}^m$.
3 Model Order Reduction for State Space Models
A state space model is given by the system
$$E \dot{x}(t) = A x(t) + G u(t), \qquad (5)$$
$$y(t) = C^T x(t), \qquad (6)$$
where $E, A \in \mathbb{R}^{n \times n}$, $G \in \mathbb{R}^{n \times p}$ and $C^T \in \mathbb{R}^{m \times n}$. The vector $x(t) \in \mathbb{R}^n$ is called the state vector, the vector $u(t) \in \mathbb{R}^p$ is called the input and $y(t) \in \mathbb{R}^m$ is called the output.
In structural mechanical FE models a dimension $n$ of up to one million is common, so the system matrices are large-scale and sparse. Because of that, the computation of the response of the system, and so the whole simulation, is often too costly. Therefore one tries to find a reduced system of dimension $q \ll n$ which is expected to capture the essential dynamic properties of the original system. With this reduced system the computational cost of the whole simulation process would be reduced.
The reduced system has the form
$$\tilde{E} \dot{\tilde{x}}(t) = \tilde{A} \tilde{x}(t) + \tilde{G} u(t), \qquad (7)$$
$$\tilde{y}(t) = \tilde{C}^T \tilde{x}(t), \qquad (8)$$
where $\tilde{E}, \tilde{A} \in \mathbb{R}^{q \times q}$, $\tilde{G} \in \mathbb{R}^{q \times p}$ and $\tilde{C}^T \in \mathbb{R}^{m \times q}$. The vector $\tilde{x}(t) \in \mathbb{R}^q$ is the (reduced) state vector, $u(t) \in \mathbb{R}^p$ is the input vector and $\tilde{y}(t) \in \mathbb{R}^m$ is the reduced output vector.
The dimension $p$ of the input vector $u(t)$ and the dimension $m$ of the output $\tilde{y}(t)$ are unchanged. Only the system matrices are of lower dimension, and the output $\tilde{y}(t)$ is an approximation to the original output $y(t)$.
3.1 Krylov subspace methods
The main idea of Krylov subspace methods is to define a projection from the state space of the original model into a lower dimensional space.
Consider a projection by a matrix $V$ with orthonormal columns as follows:
$$x = V \tilde{x},$$
where $V \in \mathbb{R}^{n \times q}$, $x \in \mathbb{R}^n$, $\tilde{x} \in \mathbb{R}^q$ and $q \ll n$. Apply this projection to the state space system (5),(6) and then multiply the state equation by the transpose of a matrix $W \in \mathbb{R}^{n \times q}$ with $W^T V = \Delta$, where $\Delta$ is a nonsingular diagonal matrix. A reduced model of order $q$ is then found by
$$W^T E V \dot{\tilde{x}}(t) = W^T A V \tilde{x}(t) + W^T G u(t),$$
$$\tilde{y}(t) = C^T V \tilde{x}(t).$$
So the reduced system matrices in equations (7),(8) can be defined by the congruence transformation
$$\tilde{E} = W^T E V, \quad \tilde{A} = W^T A V, \quad \tilde{G} = W^T G, \quad \tilde{C} = V^T C. \qquad (9)$$
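The congruence transformation (9) is a few matrix products. The following sketch (Python/NumPy rather than the Matlab used in the paper; the function name is ours) applies it for given projection matrices $V$ and $W$; by construction, $V = W = I$ reproduces the original system.

```python
import numpy as np

def reduce_congruence(E, A, G, C, V, W):
    """Congruence transformation (9): project the state space system
    (E, A, G, C) onto the column spans of V and W."""
    Er = W.T @ E @ V
    Ar = W.T @ A @ V
    Gr = W.T @ G
    Cr = V.T @ C          # reduced output matrix, i.e. Cr^T = C^T V
    return Er, Ar, Gr, Cr
```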
To generate the projection matrices $W$ and $V$, basis vectors of Krylov subspaces are used.
Given a square matrix $\Phi \in \mathbb{R}^{n \times n}$ and a vector $\varphi \in \mathbb{R}^n$, the $q$-th Krylov sequence¹
$$(\varphi, \Phi\varphi, \Phi^2\varphi, \dots, \Phi^{q-1}\varphi) \qquad (10)$$
is a sequence of $q$ linearly independent column vectors, and the space spanned by these vectors is called the Krylov subspace of order $q$:
$$\mathcal{K}_q(\Phi, \varphi) = \operatorname{span}\{v_0, v_1, \dots, v_{q-1}\}, \qquad (11)$$
where $v_j = \Phi^j \varphi$, $j = 0, \dots, q-1$.
The following section addresses the question of which matrix $\Phi$ and which vector $\varphi$ have to be used to generate the Krylov subspace (11).
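For illustration, a basis of (11) can be generated by repeatedly applying $\Phi$ to $\varphi$; since the raw sequence (10) quickly becomes ill-conditioned, one works with an orthonormalized basis in practice (the Arnoldi and Lanczos algorithms below do this iteratively; the sketch here simply uses a QR factorization and is a Python illustration with names of our choosing).

```python
import numpy as np

def krylov_basis(Phi, phi, q):
    """Orthonormal basis of K_q(Phi, phi) = span{phi, Phi phi, ..., Phi^{q-1} phi}.
    Orthonormalization (here via QR) avoids the ill-conditioning of the
    raw Krylov sequence (10)."""
    n = phi.shape[0]
    Kseq = np.empty((n, q))
    v = phi.copy()
    for j in range(q):
        Kseq[:, j] = v    # v = Phi^j phi
        v = Phi @ v
    Q, _ = np.linalg.qr(Kseq)
    return Q
```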
3.2 Moment matching
Taking the Laplace transform of the impulse response $y(t)$ yields the transfer function $T$ of the original system:
$$T(s) = C^T \underbrace{(sE - A)^{-1} G}_{X(s)} = C^T X(s). \qquad (12)$$
Expanding $X(s)$ in a power series about a frequency $\bar{s} \in \mathbb{C}$, called the expansion point, gives
$$X(s) = \sum_{j=0}^{\infty} X^{(j)}(\bar{s}) \, (s - \bar{s})^j,$$
where
$$X^{(j)}(\bar{s}) = [(A - \bar{s}E)^{-1} E]^j (A - \bar{s}E)^{-1} G.$$
For the transfer function one obtains
$$T(s) = C^T \left( \sum_{j=0}^{\infty} X^{(j)}(\bar{s}) \, (s - \bar{s})^j \right) = \sum_{j=0}^{\infty} \underbrace{C^T X^{(j)}(\bar{s})}_{T^{(j)}(\bar{s})} \, (s - \bar{s})^j.$$
¹The matrices and vectors used to construct a Krylov subspace are denoted by Greek letters, as is common in engineering.
$X^{(j)}(\bar{s})$ is called the $j$-th order input moment of $T(s)$ at $\bar{s}$, and $T^{(j)}(\bar{s})$ represents the $j$-th order system moment of $T(s)$ at $\bar{s}$.
Padé approximation requires that the first $\tilde{j}$ moments of the transfer function $\tilde{T}$ of the reduced system equal the first moments of the original system at $\bar{s}$:
$$T^{(j)}(\bar{s}) = \tilde{T}^{(j)}(\bar{s}), \quad j = 0, 1, \dots, \tilde{j}.$$
This method is called Padé approximation or moment matching.
An alternative is to use more than one expansion point; this method is called multi-point Padé approximation or multi-point moment matching. Expanding $X(s)$ in power series about several frequencies $s_i \in \mathbb{C}$ for $i = 1, 2, \dots, \tilde{i}$ gives
$$X(s) = \sum_{j=0}^{\infty} X^{(j)}(s_1)(s - s_1)^j,$$
$$X(s) = \sum_{j=0}^{\infty} X^{(j)}(s_2)(s - s_2)^j,$$
$$\vdots$$
$$X(s) = \sum_{j=0}^{\infty} X^{(j)}(s_{\tilde{i}})(s - s_{\tilde{i}})^j,$$
where
$$X^{(j)}(s_i) = [(A - s_i E)^{-1} E]^j (A - s_i E)^{-1} G.$$
One obtains $\tilde{i}$ different realizations of the transfer function:
$$T(s) = C^T \left( \sum_{j=0}^{\infty} X^{(j)}(s_i)(s - s_i)^j \right) = \sum_{j=0}^{\infty} \underbrace{C^T X^{(j)}(s_i)}_{T^{(j)}(s_i)} \, (s - s_i)^j \quad \text{for } i = 1, \dots, \tilde{i}.$$
$X^{(j)}(s_i)$ is called the $j$-th order input moment of $T(s)$ at $s_i$, and $T^{(j)}(s_i)$ represents the $j$-th order system moment of $T(s)$ at $s_i$.
Multi-point Padé approximation (or multi-point moment matching) requires that the first $\tilde{j}$ moments of the reduced system equal the first moments of the original system at the expansion points $s_i$, $i = 1, 2, \dots, \tilde{i}$:
$$T^{(j)}(s_i) = \tilde{T}^{(j)}(s_i), \quad j = 0, 1, \dots, \tilde{j}.$$
One can compute the moments $X^{(j)}(s_i)$ explicitly, but this computation is numerically unstable.
Instead, Krylov subspace methods are the methods of choice. For that, set in (10)
$$\Phi_i = (A - s_i E)^{-1} E \quad \text{and} \quad \varphi_i = (A - s_i E)^{-1} G \quad \text{for } i = 1, \dots, \tilde{i}$$
to iteratively generate the linearly independent vectors in (11). The Krylov subspaces $\mathcal{K}_{\tilde{j}}(\Phi_i, \varphi_i)$ are indeed spanned by the input moments $X^{(j)}(s_i)$ for $j = 0, 1, \dots, \tilde{j} - 1$ at expansion point $s_i$; they are called input Krylov subspaces.
To iteratively generate an orthonormal basis
$$V = \{v_0, v_1, \dots, v_{\tilde{j}-1}\}, \qquad (13)$$
of the Krylov subspace (11), where $v_j = \Phi_i^j \varphi_i$, $j = 0, \dots, \tilde{j} - 1$, the Arnoldi or the Lanczos algorithm is used.
To iteratively generate a dual Krylov subspace, set in (10)
$$\Phi_i = (A - s_i E)^{-T} E^T \quad \text{and} \quad \varphi_i = (A - s_i E)^{-T} C \quad \text{for } i = 1, \dots, \tilde{i}.$$
These Krylov subspaces are called output Krylov subspaces.
To iteratively generate an orthonormal basis
$$W = \{w_0, w_1, \dots, w_{\tilde{j}-1}\}, \qquad (14)$$
of the Krylov subspace (11), where $w_j = \Phi_i^j \varphi_i$, $j = 0, \dots, \tilde{j} - 1$ and $i = 1, \dots, \tilde{i}$, the Arnoldi or the Lanczos algorithm is used.
The matrices $V$ and $W$ are used to reduce the original system (5),(6) by the congruence transformation (9). Methods which use both matrices $V$ and $W$ are called two-sided methods; methods with $V = W$ are called one-sided methods.
If there is only one expansion point $\bar{s}$, the Krylov subspace method is called the Padé-via-Lanczos or the Padé-via-Arnoldi method. If there is more than one expansion point, the Krylov subspace method is called the Dual-Rational-Arnoldi or the Dual-Rational-Lanczos method.
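The construction above can be sketched compactly for the simplest case: a one-sided method ($V = W$) with a single input and a single real expansion point. The code below (a Python/SciPy sketch, not the paper's Matlab implementation; the function name is ours) factors $(A - \bar{s}E)$ once and reuses the factorization for every Krylov step, which is the reason the LU decomposition appears in line 3 of Algorithm 1.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def pade_via_arnoldi_one_sided(E, A, G, s, q):
    """One-sided Pade-via-Arnoldi sketch for a single input G: build an
    orthonormal basis V of K_q((A - sE)^{-1}E, (A - sE)^{-1}G).
    The sparse LU factorization of (A - sE) is computed once and reused."""
    lu = spla.splu(sp.csc_matrix(A - s * E))
    v = lu.solve(np.asarray(G, dtype=float).ravel())   # phi = (A - sE)^{-1} G
    V = np.zeros((E.shape[0], q))
    for j in range(q):
        for k in range(j):                     # modified Gram-Schmidt
            v = v - (V[:, k] @ v) * V[:, k]
        V[:, j] = v / np.linalg.norm(v)
        v = lu.solve(E @ V[:, j])              # next direction Phi v_j
    return V
```

Projecting with $V$ (and $W = V$) then matches the first $q$ moments of the transfer function at $\bar{s}$, which can be verified numerically on a small example.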
3.3 Implementation
To reduce the state space systems the following Krylov subspace methods were used:
1. Padé-via-Arnoldi method (PvA) by Odabasioglu and Celik [6].
2. Padé-via-Lanczos method (PvL) by Feldmann and Freund [2].
3. Dual-Rational-Arnoldi method (DRA) by Grimme [4].
4. Dual-Rational-Lanczos method (DRL) by Gallivan, Grimme and Van Dooren [3].
The methods Padé-via-Arnoldi and Padé-via-Lanczos use a single expansion point. The methods Dual-Rational-Arnoldi and Dual-Rational-Lanczos deal with more than one expansion point.
Algorithm 1 shows the methods with one expansion point. Algorithm 2 shows the im-
plementation of the Arnoldi-Algorithm and Algorithm 3 shows the implementation of
the Lanczos-Algorithm.
The methods are implemented to handle the sparse structure of the system matrices.
To handle systems with one input but more than one output the block forms of the
Krylov-methods were implemented.
Algorithm 1 Padé-via-Arnoldi method / Padé-via-Lanczos method
Input: system matrices E, A, G, C; expansion point s̄; reduced order r; deflation tolerance detol
Output: reduced system matrices E_r, A_r, G_r, C_r
1: function [E_r, A_r, G_r, C_r] = PvA(E, A, G, C, s̄, r, detol)
   [ [E_r, A_r, G_r, C_r] = PvL(E, A, G, C, s̄, r, detol) ]
2: V = [ ], W = [ ]
3: [L, U] = lu(A − s̄E)
4: for i = 1 to r do
5:   V = Krylov(L, U, E, G, V, W, detol, i)
6:   W = Krylov(U^T, L^T, E^T, C^T, W, V, detol, i)
7: end for
8: if columns(V) ≠ columns(W) then
9:   make V and W of the same dimension
10: end if
11: [E_r, A_r, G_r, C_r] = transform(E, A, G, C, V, W)
12: if the reduced system is unstable, reduce the dimension of V and W until the system is stable
Remarks about Algorithm 1:
In line 3 the LU decomposition of the matrix (A − s̄E) is computed, possibly with pivoting.
In lines 5 and 6 the matrices V and W are computed by the Arnoldi or the Lanczos algorithm. When the Arnoldi algorithm is used, the matrices V and W are orthonormal, i.e. V^T V = I and W^T W = I. When the Lanczos algorithm is used, the matrices V and W are biorthonormal, i.e. W^T V = Δ, where Δ is a diagonal matrix.
In line 9 the dimensions of V and W are made equal. The dimensions of the matrices may differ; the cause is deflated vectors or an unequal number of input and output vectors (p ≠ m). There are two possibilities to make the dimensions of V and W equal:
1. reduce the number of columns of the matrix with higher dimension by truncating the last column, or
2. run the Krylov algorithm again to get additional columns for the matrix with lower dimension.
In line 11 the congruence transformation (9) is computed.
In line 12 the dimensions of the matrices V and W are decreased by truncating the last columns until the reduced system is stable.
Algorithm 2 Block Arnoldi method
Input: matrices L, U, E, G, V, W; deflation tolerance detol; index i
Output: transformation matrix V
1: function [V] = BlockArnoldi(L, U, E, G, V, W, detol, i)
2: if (i == 1) then
3:   R = U\(L\G)
4: else
5:   Ṽ = [V_{columns(V)−columns(G)+1}, …, V_{columns(V)}]
6:   R = U\(L\(E Ṽ))
7: end if
8: for j = 1 to columns(R) do
9:   for k = 1 to columns(V) do
10:    h = (R_j, V_k)_2
11:    R_j = R_j − h V_k
12:  end for
13:  h = sqrt((R_j, R_j)_2)
14:  if (h > detol) then
15:    ṽ = (1/h) R_j
16:    V = [V ṽ]
17:  else
18:    Deflation
19:  end if
20: end for
Remarks about Algorithm 2:
In line 3 a forward substitution LX = G and afterwards a backward substitution UR = X are computed, with the lower triangular matrix L and the upper triangular matrix U of the LU decomposition of (A − s̄E) and the input vectors G.
In lines 5 and 6 a forward substitution LX = EṼ and afterwards a backward substitution UR = X are computed, where Ṽ consists of the last columns(G) vectors of V, multiplied by E.
In lines 9 to 12 the vectors are orthogonalized.
In lines 13 to 16 the vectors are normalized.
In line 18 vectors R_j with h ≤ detol are deflated, that means they are not used to extend the matrix V.
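The orthogonalization-with-deflation loop of lines 8 to 20 can be sketched in isolation. The following Python illustration (names are ours; the paper's code was Matlab) orthogonalizes candidate columns against the current basis by modified Gram-Schmidt and drops columns whose remaining norm falls below the deflation tolerance.

```python
import numpy as np

def orthogonalize_with_deflation(R, V, detol=1e-10):
    """Orthogonalize the candidate columns of R against the basis V
    (modified Gram-Schmidt, as in lines 8-20 of Algorithm 2); columns
    whose remaining norm is below detol are deflated (dropped)."""
    basis = [V[:, k] for k in range(V.shape[1])]
    new = []
    for j in range(R.shape[1]):
        r = R[:, j].copy()
        for v in basis + new:
            r = r - (v @ r) * v
        h = np.linalg.norm(r)
        if h > detol:
            new.append(r / h)   # accept and extend the basis
        # else: deflation, the column is linearly dependent and skipped
    return np.hstack([V] + [c.reshape(-1, 1) for c in new]) if new else V
```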
Algorithm 3 Block Lanczos method
Input: matrices L, U, E, G, V, W; deflation tolerance detol; index i
Output: transformation matrix V
1: function [V] = BlockLanczos(L, U, E, G, V, W, detol, i)
2: if (i == 1) then
3:   R = U\(L\G)
4: else
5:   Ṽ = [V_{columns(V)−columns(G)+1}, …, V_{columns(V)}]
6:   R = U\(L\(E Ṽ))
7: end if
8: for j = 1 to columns(R) do
9:   for k = columns(V) − 2 to columns(V) do
10:    h = (R_j, W_k)_2
11:    R_j = R_j − h V_k
12:  end for
13:  h = sqrt((R_j, W_j)_2)
14:  if (h > detol) then
15:    ṽ = (1/h) R_j
16:    V = [V ṽ]
17:  else
18:    Deflation
19:  end if
20: end for
Remarks about Algorithm 3:
Lines 3, 5 and 6 are computed as in Algorithm 2. In lines 9 to 12 the vectors are biorthogonalized against the last vectors of W, in lines 13 to 16 they are normalized, and in line 18 vectors R_j with h ≤ detol are deflated, that means they are not used to extend the matrix V.
Algorithm 4 shows the methods with more than one expansion point:
Algorithm 4 Dual-Rational-Arnoldi method / Dual-Rational-Lanczos method
Input: system matrices E, A, G, C; expansion points s_k, k = 1, …, K; tolerance tol; vectors J_b, J_c with the number of expansions for every s_k; deflation tolerance detol
Output: reduced system matrices E_r, A_r, G_r, C_r
1: function [E_r, A_r, G_r, C_r] = DRA(E, A, G, C, s, J_b, detol)
   [ [E_r, A_r, G_r, C_r] = DRL(E, A, G, C, s, J_c, detol) ]
2: while relative change in one s_k > tol do
3:   V = [ ], W = [ ]
4:   for k = 1 to K do
5:     [L, U] = lu(A − s_k E)
6:     for i = 1 to min([J_b]_k, [J_c]_k) do
7:       V = Krylov(L, U, E, G, V, W, detol, i)
8:       W = Krylov(U^T, L^T, E^T, C^T, W, V, detol, i)
9:     end for
10:    if columns(V) ≠ columns(W) then
11:      make V and W of the same dimension
12:    end if
13:  end for
14:  E_r = W^T E V, A_r = W^T A V
15:  assign s_k ← λ_k(E_r, A_r) for k = 1, …, K
16: end while
17: [E_r, A_r, G_r, C_r] = transform(E, A, G, C, V, W)
18: if the reduced system is unstable, reduce the dimension of V and W
Remarks about Algorithm 4:
Input: s_k, k = 1, …, K are the expansion points. J_b and J_c are column vectors with the number of expansions computed at expansion point s_k in row k.
In line 5 the LU decomposition of the matrix (A − s_k E) is computed, possibly with pivoting.
In lines 7 and 8 the matrices V and W are computed by the Arnoldi or the Lanczos algorithm. When the Arnoldi algorithm is used, the matrices V and W are orthonormal, i.e. V^T V = I and W^T W = I. When the Lanczos algorithm is used, the matrices V and W are biorthonormal, i.e. W^T V = Δ, where Δ is a diagonal matrix.
In line 11 the dimensions of V and W are made equal, as described in the remarks about Algorithm 1.
In line 14 the congruence transformation (9) for E and A is computed.
In line 15 new expansion points s_k for the next iteration are chosen from the eigenvalues of (E_r, A_r), computed by solving a generalized eigenproblem, to achieve a better approximation of the original system.
In line 17 the congruence transformation (9) is computed.
In line 18 the dimensions of the matrices V and W are reduced by truncating the last columns until the reduced system is stable.
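The expansion point update of line 15 can be sketched as follows. The paper does not specify which eigenvalues of the reduced pencil are taken; picking the K eigenvalues of smallest magnitude is one plausible selection rule, assumed here for illustration (Python/SciPy, function name ours).

```python
import numpy as np
from scipy.linalg import eig

def next_expansion_points(Er, Ar, K):
    """New expansion points from the generalized eigenvalues of (Er, Ar),
    as in line 15 of Algorithm 4. Selection rule (an assumption, not
    taken from the paper): the K eigenvalues of smallest magnitude."""
    w = eig(Ar, Er, right=False)       # generalized eigenvalues lambda
    w = w[np.isfinite(w)]              # discard infinite eigenvalues
    return w[np.argsort(np.abs(w))][:K]
```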
4 Model Order Reduction for Second Order Models
In the last section a second order system was first transformed into a first order system and afterwards reduced. An alternative is to reduce the second order system before transforming it into a first order system. The advantage is that the systems to be reduced have half the dimension of the corresponding first order systems. A second order model is given by a second order system (1),(2).
The system is large-scale and sparse, and the model order reduction problem is to find a $q$-th order system with $q \ll n$:
$$\tilde{M} \ddot{\tilde{x}}(t) + \tilde{D} \dot{\tilde{x}}(t) + \tilde{K} \tilde{x}(t) = \tilde{B} u(t), \qquad (15)$$
$$\tilde{y}(t) = \tilde{C}_v^T \dot{\tilde{x}}(t) + \tilde{C}_p^T \tilde{x}(t), \qquad (16)$$
where $\tilde{M}, \tilde{D}, \tilde{K} \in \mathbb{R}^{q \times q}$, $\tilde{B} \in \mathbb{R}^{q \times p}$ and $\tilde{C}_v^T, \tilde{C}_p^T \in \mathbb{R}^{m \times q}$. The vector $\tilde{x}(t) \in \mathbb{R}^q$ is the (reduced) state vector, $u(t) \in \mathbb{R}^p$ is the input and $\tilde{y}(t) \in \mathbb{R}^m$ is the output.
The reduced-order system is expected to capture the essential dynamic behavior of the original second order system and to reduce the computational cost of the whole simulation process.
4.1 Second order systems with proportional damping
Consider the second order system with proportional damping:
$$M \ddot{x}(t) + (\alpha M + \beta K) \dot{x}(t) + K x(t) = B u(t), \qquad (17)$$
$$y(t) = C_v^T \dot{x}(t) + C_p^T x(t), \qquad (18)$$
where $M, K \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times p}$, $C_v^T, C_p^T \in \mathbb{R}^{m \times n}$, and $\alpha, \beta$ are proportional damping coefficients with $0 < \alpha, \beta < 1$. Let $z(t) = [x^T(t) \; \dot{x}^T(t)]^T$. Then a generalized state space realization is given by
$$E \dot{z}(t) = A z(t) + B u(t), \qquad (19)$$
$$y(t) = C^T z(t), \qquad (20)$$
where
$$E = \begin{bmatrix} I & 0 \\ 0 & M \end{bmatrix}, \quad A = \begin{bmatrix} 0 & I \\ -K & -(\alpha M + \beta K) \end{bmatrix}, \qquad (21)$$
$$B = \begin{bmatrix} 0 \\ B \end{bmatrix}, \quad \text{and} \quad C = \begin{bmatrix} C_p \\ C_v \end{bmatrix}, \qquad (22)$$
where $E, A \in \mathbb{R}^{2n \times 2n}$, $B \in \mathbb{R}^{2n \times p}$ and $C^T \in \mathbb{R}^{m \times 2n}$. The vector $x(t) \in \mathbb{R}^n$ is called the state vector, the vector $u(t) \in \mathbb{R}^p$ is called the input and $y(t) \in \mathbb{R}^m$ is called the output.
The transfer function of this system is given by
$$T(s) = C^T (sE - A)^{-1} B = (C_p^T + s C_v^T)(s^2 M + s(\alpha M + \beta K) + K)^{-1} B.$$
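The identity above means the transfer function can be evaluated on the $n$-dimensional second-order data directly, without forming the $2n$-dimensional pencil. A minimal sketch (Python/NumPy, function name ours):

```python
import numpy as np

def transfer_proportional(M, K, B, Cv, Cp, alpha, beta, s):
    """Evaluate T(s) = (Cp^T + s Cv^T)(s^2 M + s(alpha M + beta K) + K)^{-1} B
    directly on the n-dimensional second-order system, avoiding the
    2n-dimensional first-order realization (19)-(22)."""
    D = alpha * M + beta * K
    X = np.linalg.solve(s * s * M + s * D + K, B)
    return (Cp + s * Cv).T @ X
```

Comparing against the first-order realization (21),(22) at the same complex frequency confirms the identity numerically.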
To obtain a reduced-order model that matches the moments of the original model in state space form (19),(20), one can use the methods for first order systems by transforming the second order system into a first order system, reducing it, and transforming it back.
However, these methods may destroy the original second-order structure. That means it will not always be possible to obtain a reduced-order system corresponding to a second order system of the form (17),(18); moreover, it is then no longer possible to use techniques for large-scale sparse matrices, so the computation may be too costly. Even when this is possible, one cannot guarantee that properties such as positive definite reduced-order mass and stiffness matrices will be preserved.
Analogously to the state space methods, the goal is to generate a congruence transformation as in (9).
For the first order system (19)-(22) the associated Krylov subspace is
$$\mathcal{V}_{\tilde{j}} = \operatorname{span}\{\varphi, \Phi\varphi, \Phi^2\varphi, \dots, \Phi^{\tilde{j}-1}\varphi\}, \qquad (23)$$
where $\Phi = (A - \bar{s}E)^{-1}E$ and $\varphi = (A - \bar{s}E)^{-1}B$ at expansion point $\bar{s}$.
Now one can apply the methods for state space systems described in the last section.
Alternatively, it is possible to reduce the second-order system without transforming it into a first order system.
Taking the Laplace transform of the impulse response $y(t)$ of the second order system yields the transfer function $T(s)$ of the original second order system with proportional damping at frequency $s$:
$$T(s) = C^T \underbrace{(s^2 M + s(\alpha M + \beta K) + K)^{-1} B}_{X(s)} = C^T X(s).$$
Expanding $X(s)$ in power series about several frequencies $s_i \in \mathbb{C}$ for $i = 1, 2, \dots, \tilde{i}$ gives
$$X(s) = \sum_{j=0}^{\infty} X^{(j)}(s_1)(s - s_1)^j,$$
$$X(s) = \sum_{j=0}^{\infty} X^{(j)}(s_2)(s - s_2)^j,$$
$$\vdots$$
$$X(s) = \sum_{j=0}^{\infty} X^{(j)}(s_{\tilde{i}})(s - s_{\tilde{i}})^j,$$
where
$$X^{(j)}(s_i) = [(s_i^2 M + s_i(\alpha M + \beta K) + K)^{-1} M]^j (s_i^2 M + s_i(\alpha M + \beta K) + K)^{-1} B.$$
One obtains $\tilde{i}$ different realizations of the transfer function
$$T(s) = C^T \left( \sum_{j=0}^{\infty} X^{(j)}(s_i)(s - s_i)^j \right) = \sum_{j=0}^{\infty} \underbrace{C^T X^{(j)}(s_i)}_{T^{(j)}(s_i)} \, (s - s_i)^j \quad \text{for } i = 1, \dots, \tilde{i}.$$
$X^{(j)}(s_i)$ is called the $j$-th order input moment of $T(s)$ and $T^{(j)}(s_i)$ represents the $j$-th order system moment of $T(s)$ at $s_i$.
Analogously to the state space methods, one can construct Krylov subspaces by the sequences
$$(\varphi_i, \Phi_i \varphi_i, \dots, \Phi_i^{\tilde{j}-1} \varphi_i),$$
where $\Phi_i = (s_i^2 M + s_i(\alpha M + \beta K) + K)^{-1} M$ and $\varphi_i = (s_i^2 M + s_i(\alpha M + \beta K) + K)^{-1} B$, for $i = 1, \dots, \tilde{i}$.
Each sequence consists of $\tilde{j}$ linearly independent column vectors, and the spaces spanned by these vectors are called the Krylov subspaces of order $\tilde{j}$:
$$\mathcal{K}_{\tilde{j}}(\Phi, \varphi) = \operatorname{span}\{v_0, v_1, \dots, v_{\tilde{j}-1}\}, \qquad (24)$$
where $v_j = \Phi^j \varphi$, $j = 0, \dots, \tilde{j} - 1$. In [7] it is shown that for the first-order system (19)-(22) the associated Krylov subspace
$$\mathcal{V}_{\tilde{j}} = \operatorname{span}\{\varphi, \Phi\varphi, \Phi^2\varphi, \dots, \Phi^{\tilde{j}-1}\varphi\}$$
may be decomposed as
$$\mathcal{V}_{\tilde{j}} \subseteq \mathcal{K}_{\tilde{j}} \oplus \mathcal{K}_{\tilde{j}}.$$
Hence, one can directly reduce the second order system.
The Krylov subspaces $\mathcal{K}_{\tilde{j}}(\Phi_i, \varphi_i)$ are indeed spanned by the input moments $X^{(j)}(s_i)$ for $j = 0, 1, \dots, \tilde{j} - 1$ at expansion point $s_i$; they are called input Krylov subspaces.
To iteratively generate an orthonormal basis
$$V = \{v_0, v_1, \dots, v_{\tilde{j}-1}\}, \qquad (25)$$
where $v_j = \Phi_i^j \varphi_i$, $j = 0, \dots, \tilde{j} - 1$ and $i = 1, \dots, \tilde{i}$, the Arnoldi or the Lanczos algorithm is used.
To generate a dual Krylov subspace, set in (10)
$$\Phi_i = (s_i^2 M + s_i(\alpha M + \beta K) + K)^{-T} M^T$$
and
$$\varphi_i = (s_i^2 M + s_i(\alpha M + \beta K) + K)^{-T} (C_p + s_i C_v).$$
These Krylov subspaces are called output Krylov subspaces.
To iteratively generate an orthonormal basis
$$W = \{w_0, w_1, \dots, w_{\tilde{j}-1}\}, \qquad (26)$$
where $w_j = \Phi_i^j \varphi_i$, $j = 0, \dots, \tilde{j} - 1$ and $i = 1, \dots, \tilde{i}$, the Arnoldi or the Lanczos algorithm is used.
The matrices $V$ and $W$ are used to reduce the original system (15),(16) by the congruence transformation
$$\tilde{M} = W^T M V, \quad \tilde{K} = W^T K V, \quad \tilde{B} = W^T B, \quad \tilde{C}_p = V^T C_p, \quad \tilde{C}_v = V^T C_v. \qquad (27)$$
The reduced damping matrix inherits the proportional structure, $\tilde{D} = \alpha \tilde{M} + \beta \tilde{K}$.
Methods which use both matrices $V$ and $W$ are called two-sided methods; methods with $V = W$ are called one-sided methods.
If there is only one expansion point $\bar{s}$, the Krylov subspace method is called the Padé-via-Lanczos or the Padé-via-Arnoldi method for second order systems. If there is more than one expansion point, the Krylov subspace method is called the Dual-Rational-Arnoldi or the Dual-Rational-Lanczos method.
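The structure-preserving transformation (27) can be sketched as follows (Python/NumPy, function name ours). In the one-sided case $W = V$, symmetric positive definiteness of the reduced mass and stiffness matrices is preserved, which is exactly the property that can be lost when reducing the first-order form instead.

```python
import numpy as np

def reduce_second_order(M, K, B, Cv, Cp, alpha, beta, V, W):
    """Congruence transformation (27) for a proportionally damped system;
    the reduced damping matrix inherits the Rayleigh structure."""
    Mr = W.T @ M @ V
    Kr = W.T @ K @ V
    Dr = alpha * Mr + beta * Kr     # proportional damping is preserved
    Br = W.T @ B
    Cpr = V.T @ Cp
    Cvr = V.T @ Cv
    return Mr, Dr, Kr, Br, Cvr, Cpr
```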
4.2 Implementation
To reduce the second order system the following Krylov subspace methods were used:
1. Dual Rational Arnoldi for systems with proportional damping (DRA PD) by
Gugercin [7]
2. Dual Rational Lanczos for systems with proportional damping (DRL PD)
These methods have more than one expansion point and exploit the special structure
of the proportional damping matrix. Algorithm 5 shows the methods for second-order
systems.
Algorithm 5 Dual-Rational-Arnoldi method / Dual-Rational-Lanczos method
Input: system matrices M, D, K, B, C; expansion points s_k, k = 1, …, K; tolerance tol; vectors J_b, J_c with the number of expansions for every s_k; deflation tolerance detol
Output: reduced system matrices of first order E_r, A_r, B_r, C_r
1: function [E_r, A_r, B_r, C_r] = DRA PD(M, D, K, B, C, s, r, detol)
   [ [E_r, A_r, B_r, C_r] = DRL PD(M, D, K, B, C, s, r, detol) ]
2: while relative change in one s_k > tol do
3:   V = [ ], W = [ ]
4:   for k = 1 to K do
5:     [L, U] = lu(s_k^2 M + s_k D + K)
6:     for i = 1 to min([J_b]_k, [J_c]_k) do
7:       V = Krylov(L, U, M, B, V, W, detol, i)
8:       W = Krylov(U^T, L^T, M^T, C^T, W, V, detol, i)
9:     end for
10:    if columns(V) ≠ columns(W) then
11:      make V and W of the same dimension
12:    end if
13:  end for
14:  [M_r, D_r, K_r, B_r, C_r] = transform(M, D, K, B, C, V, W)
15:  [E_r, A_r, B_r, C_r] = trans to 1o(M_r, D_r, K_r, B_r, C_r)
16:  assign s_k ← λ_k(E_r, A_r) for k = 1, …, K
17: end while
18: [M_r, D_r, K_r, B_r, C_r] = transform(M, D, K, B, C, V, W)
19: [E_r, A_r, B_r, C_r] = trans to 1o(M_r, D_r, K_r, B_r, C_r)
20: if the reduced system is unstable, reduce the dimension of V and W
Remarks about Algorithm 5:
Input: s_k, k = 1, …, K are the expansion points. J_b and J_c are column vectors with the number of expansions computed at expansion point s_k in row k.
In line 5 the LU decomposition of the matrix (s_k^2 M + s_k D + K) is computed, possibly with pivoting.
In lines 7 and 8 the matrices V and W are computed successively by the Arnoldi or the Lanczos algorithm. When the Arnoldi algorithm is used, the matrices V and W are orthonormal, i.e. V^T V = I and W^T W = I. When the Lanczos algorithm is used, the matrices V and W are biorthonormal, i.e. W^T V = Δ, where Δ is a diagonal matrix.
In line 11 the dimensions of V and W are made equal, as described in the remarks about Algorithm 1.
In line 14 the congruence transformation (27) is computed, and in line 15 the system is transformed into a first order system via equations (21) and (22).
In line 16 new expansion points s_k for the next iteration are chosen from the eigenvalues of (E_r, A_r), computed by solving a generalized eigenproblem.
In line 18 the congruence transformation (27) is computed again, and in line 19 the system is transformed into a first order system via equations (21) and (22).
In line 20 the dimensions of the matrices V and W are reduced by truncating the last columns until the reduced system is stable.
5 Numerical Results
5.1 The structural mechanical FE-model
To test the different methods, first a part of a machine tool was designed using the CAD environment Nastran. It is a single feed drive with a ball screw drive and a linear guide. The structure of the machine tool part is shown in Figure 1. The structure is embedded in a cascade position control loop. The torque at the motor spindle was defined as the input of the structure. The angle and the velocity of the motor spindle and the position and the velocity of the slide were defined as outputs.
The test model is of order n = 738, i.e. it has 738 degrees of freedom (DOFs), one input (p = 1) and four outputs (m = 4).
Figure 1: FE structure of the machine part; top: geometry of the test model, bottom: without the geometry.
The structures of the mass matrix and the stiffness matrix are shown in Figure 2. The parameters of the Rayleigh damping matrix were chosen as α = 0.02 and β = α/100.
Figure 3 shows the transfer function from 0 Hz to 1000 Hz, and Figure 4 the time response of the system from t = 0 s to t = 1 s; the input used was a sine wave u(t) = sin(4πt).
5.1.1 Modal reduced models
In engineering, modal reduction is widely used. This method is based on the computation of the eigenfrequencies, called modes. For that, an eigenvalue decomposition of the system is calculated. Then only the eigenvalues which are most interesting for the engineer, and their corresponding eigenvectors, are used to build a reduced system. For the problem at hand the frequencies between 0 Hz and 1000 Hz are most important. The corresponding eigenvectors are used to construct a transformation matrix V. The matrix V is orthogonal, so the spectrum of eigenvalues in the frequency interval of interest remains unchanged.
Figure 2: Structures of the mass matrix and the stiffness matrix of the test model with 738 DOFs (nonzero elements: M 286, K 4006).
Figure 3: Transfer function of the system (magnitude over frequency, 0 Hz to 1000 Hz, input 1 to outputs 1-4).
The reduced system parameters in equations (15),(16) can be defined by the congruence transformation
$$\tilde{M} = V^T M V, \quad \tilde{K} = V^T K V, \quad \tilde{B} = V^T B, \quad \tilde{C} = V^T C. \qquad (28)$$
To compare the Krylov subspace reduced systems with a modal reduced system, a modal reduction onto the 60 lowest eigenfrequencies was used, which results in a reduced system of order 60. To test the reduced systems, the transfer functions and the time responses of the reduced and the original system were compared.
The algorithms were implemented in Matlab version 7.1 (R14). The computation was done on an AMD Athlon(tm) 64 X2 Dual Core Processor 4400+ with 2 GB RAM.
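The modal basis described above can be sketched with a generalized symmetric eigensolver (Python/SciPy rather than the paper's Matlab; the function name is ours): the eigenvectors of $K v = \lambda M v$ belonging to the lowest eigenfrequencies form the columns of $V$ for the congruence transformation (28).

```python
import numpy as np
from scipy.linalg import eigh

def modal_basis(M, K, q):
    """Modal reduction basis: eigenvectors of the generalized eigenproblem
    K v = lambda M v belonging to the q lowest eigenfrequencies.
    eigh returns ascending eigenvalues and M-orthonormal eigenvectors."""
    w, V = eigh(K, M)
    freqs_hz = np.sqrt(np.abs(w)) / (2 * np.pi)   # lambda = (2 pi f)^2
    return V[:, :q], freqs_hz[:q]
```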
Figure 4: Time response of the system (step responses, input 1 to outputs 1-4, t = 0 s to 1 s).
5.2 Error Norms
To test the reduced systems the following errors were used:

To measure the error of the impulse response of the reduced system from the input to a single output, the errors

  ε_{t,rel}(t) = |y(t) - y_r(t)| / |y(t)|,   ε_{t,abs}(t) = |y(t) - y_r(t)|

are used. Here ε_{t,rel}(t) is the relative error, ε_{t,abs}(t) is the absolute error, y(t) is the response of the original system and y_r(t) is the response of the reduced system at time t.

To measure the error of the transfer function of the reduced system from the input to a single output, the errors

  ε_{f,rel}(f) = |H(2πfi) - H_r(2πfi)| / |H(2πfi)|,   ε_{f,abs}(f) = |H(2πfi) - H_r(2πfi)|

are used. Here ε_{f,rel}(f) is the relative error, ε_{f,abs}(f) is the absolute error, H is the transfer function of the original system and H_r is the transfer function of the reduced system.
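The frequency-domain error measures above can be sketched numerically. This is a hedged illustration only: a tiny undamped stand-in system with transfer function H(s) = c^T (s^2 M + K)^{-1} b replaces the FE model, and the "reduced" model is a crude coordinate truncation, not one of the tested methods.

```python
import numpy as np

# Small stand-in undamped second-order SISO system:
# transfer function H(s) = c^T (s^2 M + K)^{-1} b.
M = np.diag([1.0, 2.0])
K = np.diag([100.0, 400.0])
b = np.array([1.0, 1.0])
c = np.array([1.0, 1.0])

def H(f, M, K, b, c):
    """Evaluate the transfer function at s = 2*pi*f*i, with f in Hz."""
    s = 2j * np.pi * f
    return c @ np.linalg.solve(s**2 * M + K, b)

# "Reduced" model: truncation to the first coordinate, for illustration only.
Mr, Kr, br, cr = M[:1, :1], K[:1, :1], b[:1], c[:1]

f = 1.0
e_abs = abs(H(f, M, K, b, c) - H(f, Mr, Kr, br, cr))   # absolute error
e_rel = e_abs / abs(H(f, M, K, b, c))                  # relative error
assert 0.0 < e_rel < 1.0
```

Sweeping f over a grid of the frequency interval and taking the maximum of e_rel gives exactly the max(ε_{f,rel}(f)) quantity reported in the tables below.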
The errors of the impulse response were computed in the interval from t = 0 s to t = 1 s. The starting vector was x_0 = 0 and the input impulse was u(t) = sin(4πt).
The errors of the frequency response were computed in the interval from 0 Hz to 1000 Hz.
For the Padé-via-Arnoldi and the Padé-via-Lanczos method the expansion point s = 0 was chosen. For all other methods the expansion points were chosen at the beginning, at the midpoint and at the end of the frequency interval of interest, i.e. the expansion points 2π(0, 500i, 1000i)^T were chosen.
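A minimal sketch of how a multi-point (rational) Krylov basis can be assembled from such expansion points is given below. This illustrates only the idea, not the implemented DRA/DRL algorithms: the first order matrices are stand-ins, only one basis vector is generated per expansion point, and the complex points are split into real and imaginary parts so that the projection basis stays real.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10
E = np.eye(n)                                  # stand-in descriptor matrix
A = -np.diag(rng.uniform(1.0, 50.0, n))        # stand-in stable system matrix
b = rng.standard_normal((n, 1))                # single input

# Expansion points as in the experiments: 2*pi*(0, 500i, 1000i).
points = [2 * np.pi * s for s in (0.0, 500j, 1000j)]

cols = []
for s0 in points:
    v = np.linalg.solve(s0 * E - A, b)         # one shifted solve per point
    cols.append(v.real)
    if np.imag(s0) != 0:
        cols.append(v.imag)                    # keep the projection basis real

V, _ = np.linalg.qr(np.hstack(cols))           # orthonormalize collected vectors
Er, Ar, br = V.T @ E @ V, V.T @ A @ V, V.T @ b
assert np.allclose(V.T @ V, np.eye(V.shape[1]))
```

Each additional expansion point (or each additional moment matched per point) contributes further columns to V, which is how the reduced order 60 used in the experiments would be reached.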
5.3 Results of MOR for first order systems
In the following Table 1 the chosen expansion points, the dimension of the reduced system, the time to reduce the system, the maximal relative approximation error of the transfer function and the maximal approximation error of the time response for the four methods to reduce first order systems are shown.
In Figure 5 to Figure 13 the four output responses of the reduced system, labeled y_1 to y_4, are compared with the four output responses of a modal-reduced system, labeled y_{1,m} to y_{4,m}.
Table 1: Chosen expansion points, reduced dimension, reduction time and errors of the transfer function and the time response.

method | expansion points     | reduced dimension | time to reduce the system [s] | max(ε_{f,rel}(f)) | max(ε_{t,rel}(t))
PVA    | 0                    | 60                | 2.09                          | 0.09              | 1.03 · 10^-8
PVL    | 0                    | 60                | 3.38                          | 0.00975           | 9.7 · 10^-8
DRA    | 2π(0, 500i, 1000i)^T | 60                | 6.8                           | 2 · 10^-5         | 8.8 · 10^-8
DRL    | 2π(0, 500i, 1000i)^T | 60                | 7.3                           | 2.8 · 10^-5       | 1.1 · 10^-7
5.3.1 Results with PVA
In the following the experimental results with the Padé-via-Arnoldi method are shown:
In Figure 5 and Figure 6 the approximation error of the transfer function and the error in the time response of the PVA-reduced system are plotted. The approximation close to the expansion point s = 0 is very accurate, but at higher frequencies the approximation of the transfer function deteriorates, because the PVA-method only expands at the frequency 0: at higher frequencies no information about the transfer function is captured. The accuracy is not as high as the accuracy of the modal approximation, because PVA matches moments which are less important for an exact approximation. In the time response the Krylov reduction and the modal reduction result in similar approximations.
[Log-log plots of ε_{f,abs}(f) (top) and ε_{f,rel}(f) (bottom) over 10-1000 Hz, for the Krylov-reduced outputs y_1-y_4 and the modal-reduced outputs y_{1,m}-y_{4,m}.]
Figure 5: Transfer function approximation error of the reduced system: top: absolute error, bottom: relative error.
[Plots of ε_{t,abs}(t) (top) and ε_{t,rel}(t) (bottom) over 0-1 s, for the Krylov-reduced outputs y_1-y_4 and the modal-reduced outputs y_{1,m}-y_{4,m}.]
Figure 6: Time response error of the reduced system: top: absolute error, bottom: relative error.
5.3.2 Results with PVL
In the following the experimental results with the Padé-via-Lanczos method are shown:
In Figure 7 and Figure 8 the approximation error of the transfer function and the error in the time response of the PVL-reduced system are plotted. The results are nearly the same as the results of the Padé-via-Arnoldi method. The relative error in the time domain at t = 0.52 s, t = 0.74 s and t = 0.98 s is slightly smaller than the relative error after reduction with the Padé-via-Arnoldi method.
[Log-log plots of ε_{f,abs}(f) (top) and ε_{f,rel}(f) (bottom) over 10-1000 Hz, for the Krylov-reduced outputs y_1-y_4 and the modal-reduced outputs y_{1,m}-y_{4,m}.]
Figure 7: Transfer function approximation error of the reduced system: top: absolute error, bottom: relative error.
[Plots of ε_{t,abs}(t) (top) and ε_{t,rel}(t) (bottom) over 0-1 s, for the Krylov-reduced outputs y_1-y_4 and the modal-reduced outputs y_{1,m}-y_{4,m}.]
Figure 8: Time response error of the reduced system: top: absolute error, bottom: relative error.
5.3.3 Results with DRA
In the following the experimental results with the DRA-method are shown:
In Figure 9 and Figure 10 the approximation error of the transfer function and the error in the time response of the DRA-reduced system are plotted. Over the whole frequency interval the approximations are of high accuracy. Contrary to the PVA-method and the PVL-method, which use only one expansion point, this method yields approximations of higher accuracy. The maximum absolute approximation error of the transfer function with modal reduction is 10^-10, contrary to 10^-14 after reduction with DRA. The maximum relative approximation error of the transfer function with modal reduction is 10^-2, contrary to 10^-5 after reduction with DRA. In comparison with the PVA- and PVL-methods, the maximum absolute approximation error of the transfer function reduced with PVA/PVL is 10^-9, contrary to 10^-14 after reduction with DRA; the maximum relative approximation error of the transfer function reduced with PVA/PVL is 10^-1, contrary to 10^-5 after the reduction with DRA. The best approximations of the transfer function are obtained at the expansion points.
The approximation results of the time response after reduction with this method are similar to the results with the modal reduction. The relative approximation error in the time domain at the time points t = 0.211 s and t = 0.375 s after reduction with PVA/PVL is smaller than after reduction with DRA.
[Log-log plots of ε_{f,abs}(f) (top) and ε_{f,rel}(f) (bottom) over 10-1000 Hz, for the Krylov-reduced outputs y_1-y_4 and the modal-reduced outputs y_{1,m}-y_{4,m}.]
Figure 9: Transfer function approximation error of the reduced system: top: absolute error, bottom: relative error.
[Plots of ε_{t,abs}(t) (top) and ε_{t,rel}(t) (bottom) over 0-1 s, for the Krylov-reduced outputs y_1-y_4 and the modal-reduced outputs y_{1,m}-y_{4,m}.]
Figure 10: Time response error of the reduced system: top: absolute error, bottom: relative error.
5.3.4 Results with DRL
In the following the experimental results with the DRL-method are shown:
In Figure 11 and Figure 12 the approximation error of the transfer function and the error in the time response of the DRL-reduced system are plotted. The results are nearly the same as the results with the Dual-Rational-Arnoldi method. In the frequency interval 300 Hz to 1000 Hz the approximation of the transfer function is not as exact as the approximation with DRA. At 7 Hz the relative approximation error of the transfer function after reduction with the DRL-method is smaller than the approximation error after reduction with the DRA-method.
After reduction with the DRL-method, the approximation error in the time domain at t = 0.7 s is smaller than the approximation error after reduction with the DRA-method.
[Log-log plots of ε_{f,abs}(f) (top) and ε_{f,rel}(f) (bottom) over 10-1000 Hz, for the Krylov-reduced outputs y_1-y_4 and the modal-reduced outputs y_{1,m}-y_{4,m}.]
Figure 11: Transfer function approximation error of the reduced system: top: absolute error, bottom: relative error.
[Plots of ε_{t,abs}(t) (top) and ε_{t,rel}(t) (bottom) over 0-1 s, for the Krylov-reduced outputs y_1-y_4 and the modal-reduced outputs y_{1,m}-y_{4,m}.]
Figure 12: Time response error of the reduced system: top: absolute error, bottom: relative error.
5.4 Results of MOR for Second Order Systems
In the following Table 2 the chosen expansion points, the dimension of the reduced system, the time to reduce the system, the maximal relative approximation error of the transfer function and the maximal approximation error of the time response for the two methods DRA PD and DRL PD are shown.
Table 2: Chosen expansion points, reduced dimension, reduction time and errors of the transfer function and the time response.

method | expansion points     | reduced dimension | time to reduce the system [s] | max(ε_{f,rel}(f)) | max(ε_{t,rel}(t))
DRA PD | 2π(0, 500i, 1000i)^T | 60                | 3.7                           | 2.1 · 10^-5       | 1.4 · 10^-7
DRL PD | 2π(0, 500i, 1000i)^T | 60                | 3.8                           | 2.6 · 10^-5       | 1.4 · 10^-7
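Assuming "PD" stands for proportional damping D = αM + βK, as in the second-order Krylov literature, the structure-preserving one-sided projection behind these methods can be sketched as follows. All matrices here are small random stand-ins, and α, β are arbitrary example values, not quantities from the FE model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 12
M = np.diag(rng.uniform(1.0, 2.0, n))          # stand-in mass matrix
K = np.diag(rng.uniform(100.0, 500.0, n))      # stand-in stiffness matrix
alpha, beta = 0.01, 0.001                      # example damping coefficients
D = alpha * M + beta * K                       # proportional damping (assumed)
B = rng.standard_normal((n, 1))

# One basis vector per expansion point, via second-order shifted solves.
points = [2 * np.pi * s for s in (0.0, 500j, 1000j)]
cols = []
for s0 in points:
    v = np.linalg.solve(s0**2 * M + s0 * D + K, B)
    cols.append(v.real)
    if np.imag(s0) != 0:
        cols.append(v.imag)                    # keep the projection basis real

V, _ = np.linalg.qr(np.hstack(cols))
# Congruence projection keeps the second-order form and the damping structure:
Mr, Dr, Kr = V.T @ M @ V, V.T @ D @ V, V.T @ K @ V
assert np.allclose(Dr, alpha * Mr + beta * Kr)
```

The final assertion shows the key point of the second-order methods: because the same V projects M, D and K, the reduced damping matrix is again proportional with the same α and β.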
5.4.1 Results with DRA PD
In the following the experimental results with the DRA PD-method are shown:
In Figure 13 and Figure 14 the approximation error of the transfer function and the error in the time response of the DRA PD-reduced system are plotted. Over the whole frequency interval the approximation is accurate. The results are nearly the same as the results after reducing the first order system with the DRA or the DRL method. At higher frequencies the method is not as exact as the DRA-method for first order systems: the absolute error at 300 Hz is 1.64 · 10^-13, in contrast to 4 · 10^-18 after reduction with the DRA-method. Over the whole frequency interval the method is more exact than the modal reduction. In the time domain the second output is approximated worst; here even the modal reduction has a smaller error.
[Log-log plots of ε_{f,abs}(f) (top) and ε_{f,rel}(f) (bottom) over 10-1000 Hz, for the Krylov-reduced outputs y_1-y_4 and the modal-reduced outputs y_{1,m}-y_{4,m}.]
Figure 13: Transfer function approximation error of the reduced system: top: absolute error, bottom: relative error.
[Plots of ε_{t,abs}(t) (top) and ε_{t,rel}(t) (bottom) over 0-1 s, for the Krylov-reduced outputs y_1-y_4 and the modal-reduced outputs y_{1,m}-y_{4,m}.]
Figure 14: Time response error of the reduced system: top: absolute error, bottom: relative error.
5.4.2 Results with DRL PD
In the following the experimental results with the DRL PD-method are shown:
In Figure 15 and Figure 16 the approximation error of the transfer function and the error in the time response of the DRL PD-reduced system are plotted. The results are nearly the same as the results with the Dual-Rational-Arnoldi method. As with the DRA PD-method, the error of the fourth output gets worse at 300 Hz. Also the approximations in the time domain of the second and fourth output (y_2 and y_4) are not as exact as the approximations of the DRA-method or the modal reduction.
[Log-log plots of ε_{f,abs}(f) (top) and ε_{f,rel}(f) (bottom) over 10-1000 Hz, for the Krylov-reduced outputs y_1-y_4 and the modal-reduced outputs y_{1,m}-y_{4,m}.]
Figure 15: Transfer function approximation error of the reduced system: top: absolute error, bottom: relative error.
[Plots of ε_{t,abs}(t) (top) and ε_{t,rel}(t) (bottom) over 0-1 s, for the Krylov-reduced outputs y_1-y_4 and the modal-reduced outputs y_{1,m}-y_{4,m}.]
Figure 16: Time response error of the reduced system: top: absolute error, bottom: relative error.
6 Conclusions
Several Krylov methods to reduce large scale FE models of machine tools were tested. For that, methods to reduce state space models and methods to reduce second order models were implemented and modified to handle the special matrices of the FE models in an optimal way. The approximations of the methods with only one expansion point appear to be less exact than the approximations of the methods with more than one expansion point. The approximations of the methods which reduce state space models are of nearly the same exactness as the approximations of the methods which reduce second order models. For the problem at hand the methods with more than one expansion point should be used, because they give better results over a wider range of frequencies.
After testing a part of a machine tool with 738 degrees of freedom, the next step is to test a complete machine tool with up to a million degrees of freedom. During the whole reduction process the computation of the LU-decomposition of a large and sparse matrix is the most time consuming step. For very large models the application of algorithms with iterative (indirect) solvers for linear systems should be tested. Another open question is how to adapt the order of the reduced system during the reduction process; for that, an error bound or an error estimator must first be found.
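The LU cost noted above comes from the shifted linear solves; one standard mitigation already available with direct solvers is to factor each shifted matrix once per expansion point and then reuse the factorization for all Krylov vectors at that point. A sketch with SciPy's sparse LU follows, using a stand-in tridiagonal matrix rather than the FE data:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# Stand-in sparse symmetric positive definite "stiffness-like" matrix.
K = sp.diags([-np.ones(n - 1), 4.0 * np.ones(n), -np.ones(n - 1)],
             [-1, 0, 1], format="csc")
B = np.ones((n, 2))

lu = spla.splu(K)          # factor once per expansion point ...
V1 = lu.solve(B)           # ... then reuse it for every Krylov vector:
V2 = lu.solve(V1)          # next block K^{-2} B without refactorization

assert np.allclose(K @ V1, B)
```

For models with millions of degrees of freedom even one such factorization may be too expensive, which is exactly where the iterative solvers mentioned above would replace `splu`.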