Handout NUM Eigenvalues


Introduction to Numerical Computing

Eigenvalue Problems

G. Uchida, W. Gansterer

Universität Wien

October 2023

G. Uchida, W. Gansterer (Univ. Wien) Einf. in Numerical Computing


Localizing Eigenvalues



Localizing Eigenvalues

▶ For some purposes we may not need to determine the eigenvalues to high accuracy
▶ Relatively crude information about their location in the complex plane may suffice

Simplest “localization” result:


▶ If λ is an eigenvalue of A, then

|λ| ≤ ∥A∥

▶ Holds for any matrix norm induced by a vector norm!


⇒ All eigenvalues of A lie in a disk around the origin in the complex
plane with radius ∥A∥
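This bound is easy to verify numerically. A minimal NumPy sketch (the 2 × 2 example matrix is an assumed illustration, not taken from this slide):

```python
import numpy as np

# Assumed example matrix with eigenvalues 4 and 2
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

spectral_radius = max(abs(np.linalg.eigvals(A)))

# Check |lambda| <= ||A|| for the induced 1-, 2-, and inf-norms
for p in (1, 2, np.inf):
    induced_norm = np.linalg.norm(A, p)
    assert spectral_radius <= induced_norm + 1e-12
    print(f"spectral radius {spectral_radius:.2f} <= ||A||_{p} = {induced_norm:.2f}")
```

For this symmetric example all three induced norms equal 4, so the bound happens to be attained with equality.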



Localizing Eigenvalues

Sharper estimate: Gershgorin’s Theorem


▶ All eigenvalues of an n × n matrix are contained within the union
of n disks
▶ The k-th disk is centered at akk and has radius ∑_{j≠k} |akj|

How can we prove this?



Localizing Eigenvalues – Proof of Gershgorin

▶ Let λ be any eigenvalue of A


▶ Corresponding eigenvector x normalized so that ∥x∥∞ = 1
▶ Let xk be an entry of x such that |xk | = 1
(↔ Definition of ∞-norm!)
▶ Because Ax = λx, we have

  ∑_{j=1}^{n} akj xj = λ xk   ⇒   (λ − akk) xk = ∑_{j≠k} akj xj

so that

  |λ − akk| ≤ ∑_{j≠k} |akj| · |xj| ≤ ∑_{j≠k} |akj|

(the last step uses |xj| ≤ ∥x∥∞ = 1), i. e., λ lies in the k-th disk

▶ Apply theorem to A⊤ ⇒ similar result holds for disks defined by


off-diagonal absolute column sums



Gershgorin’s Theorem

Several useful implications, e. g.:

▶ A strictly diagonally dominant matrix,

  ∑_{j≠i} |aij| < |aii|  for all i,

must be nonsingular

▶ Why?
▶ Because zero cannot lie in any of the Gershgorin disks!



Example: Gershgorin Disks

 
A1 = [ 4.0  −0.5   0.0
       0.6   5.0  −0.6
       0.0   0.5   3.0 ]

▶ Eigenvalues of A1 are denoted by ×


 
A2 = [ 4.0   0.5   0.0
       0.6   5.0   0.6
       0.0   0.5   3.0 ]

▶ Eigenvalues of A2 are denoted by •
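The disks can be computed directly from the rows of the matrix. A small NumPy sketch for A1 (the helper `gershgorin_disks` is an illustrative name, not a library routine):

```python
import numpy as np

def gershgorin_disks(A):
    """Center and radius of each Gershgorin row disk of A."""
    centers = np.diag(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)
    return list(zip(centers, radii))

A1 = np.array([[4.0, -0.5,  0.0],
               [0.6,  5.0, -0.6],
               [0.0,  0.5,  3.0]])

disks = gershgorin_disks(A1)     # [(4.0, 0.5), (5.0, 1.2), (3.0, 0.5)]
for lam in np.linalg.eigvals(A1):
    # every eigenvalue must lie in at least one disk
    assert any(abs(lam - c) <= r + 1e-12 for c, r in disks)
    print("eigenvalue", lam, "lies in the union of the disks")
```

Note that A1 has a complex conjugate pair of eigenvalues; `abs(lam - c)` handles the complex case correctly.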



Example: Gershgorin Disks

[Figure: Gershgorin disks of A1 and A2 in the complex plane; eigenvalues of A1 marked ×, eigenvalues of A2 marked •]


Problem Transformations



Problem Transformations

▶ Many numerical eigensolvers are based on reducing the original


matrix to a simpler form
▶ “Simpler” form ⇔ eigenpairs “easy” to determine

⇒ Identify
▶ what types of transformations leave eigenvalues either unchanged
or easily recoverable
▶ for what types of matrices eigenvalues are easily determined



Problem Transformations

1. Shift: Subtract a constant scalar from each diagonal entry of a


matrix
▶ (A − σI) x = (λ − σ) x
▶ Eigenvalues are shifted, eigenvectors are unaffected

2. Inversion: If A nonsingular with Ax = λx and x ̸= 0, then


▶ λ ̸= 0
▶ A⁻¹x = (1/λ) x
▶ Eigenvalues are the reciprocals, eigenvectors are unaffected

3. Powers: Ax = λx ⇒ A2 x = λ2 x ⇒ . . . ⇒ Ak x = λk x
▶ Eigenvalues are raised to the k th power
▶ Eigenvectors are unchanged for any positive integer k
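All three transformations can be checked numerically. A sketch using NumPy and an assumed 2 × 2 example matrix:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])          # assumed example; eigenvalues 2 and 4
lam = np.sort(np.linalg.eigvals(A).real)
sigma = 1.0

# 1. Shift: eigenvalues of A - sigma*I are lambda - sigma
assert np.allclose(np.sort(np.linalg.eigvals(A - sigma * np.eye(2)).real),
                   lam - sigma)

# 2. Inversion: eigenvalues of A^{-1} are 1/lambda
assert np.allclose(np.sort(np.linalg.eigvals(np.linalg.inv(A)).real),
                   np.sort(1.0 / lam))

# 3. Powers: eigenvalues of A^k are lambda^k
k = 3
assert np.allclose(np.sort(np.linalg.eigvals(np.linalg.matrix_power(A, k)).real),
                   lam ** k)

print("shift, inversion, and powers behave as claimed")
```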



Problem Transformations

4. Polynomials: If

p(t) = c0 + c1 t + c2 t2 + · · · + ck tk

we can define:

p(A) := c0 I + c1 A + c2 A2 + · · · + ck Ak

If Ax = λx, then
▶ p(A)x = p(λ)x
▶ Eigenvectors of p(A) are the same as those of A
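The same check works for a matrix polynomial. A sketch with an assumed quadratic p(t) = 2 − 3t + t² and the same example matrix:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])              # assumed example; eigenvalues 2 and 4
c0, c1, c2 = 2.0, -3.0, 1.0             # p(t) = c0 + c1*t + c2*t^2

pA = c0 * np.eye(2) + c1 * A + c2 * (A @ A)

lam = np.linalg.eigvals(A).real
p_lam = c0 + c1 * lam + c2 * lam ** 2   # p applied to each eigenvalue

# eigenvalues of p(A) are p(lambda); here p(2) = 0 and p(4) = 6
assert np.allclose(np.sort(np.linalg.eigvals(pA).real), np.sort(p_lam))
print("eigenvalues of p(A):", np.sort(np.linalg.eigvals(pA).real))
```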



Problem Transformations

5. Similarity: Now change the eigenvectors of a matrix in a


systematic way, but leave eigenvalues unchanged
▶ Similarity transformation: B is similar to A if nonsingular matrix T
exists such that
B = T⁻¹AT
▶ By = λy ⇒ T⁻¹AT y = λy ⇒ AT y = λT y

⇒ A and B have the same eigenvalues


⇒ If y is an eigenvector of B, then x = T y is an eigenvector of A
(“backtransformation”)

▶ Note: Similar matrices must have the same eigenvalues,


but two matrices that have the same eigenvalues are not
necessarily similar!



Example: Similar Matrices

     
A T = [ 3  1 ] [ 1  −1 ]  =  [ 1  −1 ] [ 4  0 ]  =  T D
      [ 1  3 ] [ 1   1 ]     [ 1   1 ] [ 0  2 ]

D = diag(λ1, λ2)

T⁻¹ A T = [  0.5  0.5 ] [ 3  1 ] [ 1  −1 ]  =  [ 4  0 ]  =  D
          [ −0.5  0.5 ] [ 1  3 ] [ 1   1 ]     [ 0  2 ]

. . . Eigenvectors of A form the columns of the transformation matrix T
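The diagonalization above can be reproduced in a few lines of NumPy, including the backtransformation x = T y:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
T = np.array([[1.0, -1.0],        # columns are the eigenvectors of A
              [1.0,  1.0]])

D = np.linalg.inv(T) @ A @ T
assert np.allclose(D, np.diag([4.0, 2.0]))

# Backtransformation: y eigenvector of D  ->  x = T y eigenvector of A
y = np.array([1.0, 0.0])          # eigenvector of D for eigenvalue 4
x = T @ y
assert np.allclose(A @ x, 4.0 * x)
print("D =", np.round(D, 12))
```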



Problem Transformations

▶ Similarity transformation requires only that the transformation


matrix T is nonsingular
▶ But it could be arbitrarily ill-conditioned (i. e., nearly singular)!
▶ Whenever possible, orthogonal or unitary similarity
transformations are strongly preferred for numerical computations
so that the transformation matrix is perfectly well-conditioned



Power Iteration



Power Iteration

▶ Simple method for computing a single eigenvalue and the


corresponding eigenvector
▶ Multiplies an arbitrary nonzero vector repeatedly by the n × n
matrix A
↔ Equivalent to multiplying the initial starting vector by successively higher powers of A



Basic Power Iteration

x0 . . . arbitrary nonzero vector


for k = 1, 2, . . . do
xk = Axk−1
end for
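This pseudocode translates directly to Python; a minimal sketch, using the 2 × 2 matrix from the example that follows. The ratio of successive values of a nonzero component approaches the dominant eigenvalue:

```python
import numpy as np

def power_iteration(A, x0, num_iters):
    """Unnormalized power iteration: x_k = A x_{k-1} (assumes num_iters >= 1)."""
    x = x0.astype(float)
    for _ in range(num_iters):
        x_prev, x = x, A @ x
    return x, x_prev

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
x, x_prev = power_iteration(A, np.array([0.0, 1.0]), num_iters=10)

# ratio of a nonzero component between iterations -> dominant eigenvalue
print("ratio:", x[1] / x_prev[1])     # approaches lambda_1 = 4
```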



Power Iteration

▶ If A has unique eigenvalue λ1 of maximum modulus with


corresponding eigenvector v1
→ Power iteration converges to a multiple of v1
Why?
▶ Assume that A is diagonalizable
▶ Express x0 as a linear combination

  x0 = ∑_{j=1}^{n} αj vj ,

where vj are the eigenvectors of A



Power Iteration

Then:

  xk = A xk−1 = A² xk−2 = · · · = A^k x0
     = A^k ∑_{j=1}^{n} αj vj = ∑_{j=1}^{n} αj A^k vj
     = λ1^k α1 v1 + ∑_{j=2}^{n} λj^k αj vj
     = λ1^k ( α1 v1 + ∑_{j=2}^{n} (λj/λ1)^k αj vj )

For j > 1, |λj/λ1| < 1, so that (λj/λ1)^k → 0
⇒ Only the term corresponding to v1 is nonvanishing



Example: Power Iteration
   
A = [ 3  1 ],  x0 = [ 0 ]
    [ 1  3 ]        [ 1 ]

▶ xk converges to a multiple of the eigenvector (1, 1)ᵀ
▶ The ratio of the values of a given nonzero component of xk from


one iteration to the next converges to the dominant eigenvalue
λ1 = 4

Power Iteration

Power iteration usually works in practice, but can fail because:


▶ x0 may have no component in the dominant eigenvector v1
(α1 = 0)
▶ Extremely unlikely if x0 is chosen randomly
▶ Not a problem in practice – rounding error usually causes α1 ̸= 0
▶ There may be more than one eigenvalue having the same
(maximum) modulus
▶ Iteration may converge to a vector that is a linear combination of the
corresponding eigenvectors
▶ Can happen in practice (e. g., a complex conjugate pair)
▶ With a real matrix and a real starting vector, the iteration cannot converge to a complex eigenvector



Normalized Power Iteration

Geometric growth of the components at each iteration risks eventual
overflow (or underflow if |λ1| < 1)
⇒ In practice the approximate eigenvector is rescaled at each
iteration to have norm 1 (typically using the ∞-norm)
. . . normalized power iteration
▶ With this normalization:
  ▶ xk → v1 / ∥v1∥∞
  ▶ ∥yk∥∞ → |λ1|



Normalized Power Iteration

x0 = arbitrary nonzero vector


for k = 1, 2, . . . do
yk = A xk−1
xk = yk / ∥yk∥∞

end for
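The normalized variant in Python, a sketch using the same example matrix. After enough iterations x approximates v1/∥v1∥∞ and the last ∥yk∥∞ approximates |λ1|:

```python
import numpy as np

def normalized_power_iteration(A, x0, num_iters=50):
    """Power iteration with inf-norm rescaling at every step."""
    x = x0.astype(float)
    for _ in range(num_iters):
        y = A @ x
        norm = np.linalg.norm(y, np.inf)    # approaches |lambda_1|
        x = y / norm                        # keep the iterate at unit inf-norm
    return x, norm

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
v, lam = normalized_power_iteration(A, np.array([0.0, 1.0]))
print(v, lam)   # v -> (1, 1), lam -> 4
```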



Example: Normalized Power Iteration
   
A = [ 3  1 ],  x0 = [ 0 ]
    [ 1  3 ]        [ 1 ]

▶ Initial vector:

  x0 = (0, 1)ᵀ = ½ (1, 1)ᵀ + ½ (−1, 1)ᵀ = α1 v1 + α2 v2



Example: Normalized Power Iteration

▶ v1 dominates, since the other component decays like

  (λ2/λ1)^k = (1/2)^k

▶ Hence the iteration vector converges to v1



Properties of Matrices and Eigenvalue Problems

Important questions for the choice of algorithm and software for


solving an eigenvalue problem:

▶ Is the matrix real or complex?
▶ Is it relatively small and dense, or large and sparse?
▶ Does it have any special properties, or is it a general matrix?
▶ Are all the eigenvalues needed or only a few?
▶ Are only the eigenvalues needed or are the corresponding
eigenvectors required as well?
▶ ...
