Large linear system resolution, domain decomposition and iterative solvers - PART 1

Nicolas Chevaugeon
GeM - Institut de Recherche en Génie Civil et Mécanique
UMR CNRS - Ecole Centrale Nantes - Université de Nantes

Section 1
Introduction

This class is dedicated to the resolution of large systems of equations.
Such systems arise in most computational mechanics applications.
Typically one has to solve Ku = f.
We will look at some details related to the computer implementation of this problem, focusing on:
- efficiency,
- problems related to size and memory management,
- accuracy,
- parallelism,
- sparsity of the system,
- direct versus iterative solvers.

Layout
- introduction, motivation
- matrix storage
- efficient basic linear algebra operations: BLAS
- matrix properties
- dense matrix factorisation and system resolution
- sparse direct solvers
- iterative solvers
- preconditioners
- domain decomposition methods
- multigrid (Jeroen Wackers)

Recommended books
- Matrix Computations (1996), by Gene H. Golub and Charles F. Van Loan.
- Iterative Methods for Sparse Linear Systems, by Yousef Saad.
- Numerical Methods in Computational Mechanics, by M. Okrouhlik.

Introduction
- Size of the problem, compared to the memory of a typical computer.
- Most numerical methods for PDEs transform the PDE into a set of discrete nonlinear equations, which are then linearised and solved iteratively.
- Each iteration requires the solution of a linear system. Its size scales with the number of degrees of freedom (dof).
- Number of dof in 3D: (L/h)^3.

Memory size

size     size^2   memory usage
1000     1.e6     7.6 MB
10000    1.e8     762.94 MB
20000    4.e8     3 GB
40000    16.e8    12 GB

Table: memory usage of an n x n matrix using double precision.
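As a quick check of the table entries (which use binary prefixes), a dense n x n matrix of 8-byte doubles needs 8 n^2 bytes:
\[
n = 10^4:\ 8n^2 = 8\times10^{8}\ \text{bytes} \approx 762.9\ \text{MB},
\qquad
n = 4\times10^4:\ 8n^2 = 1.28\times10^{10}\ \text{bytes} \approx 11.9\ \text{GB}.
\]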

Many applications give rise to (non)linear systems of equations. Typically the system matrices are:
- large: 10^8 unknowns are not exceptional anymore;
- sparse: only a fraction of the entries of the matrix is nonzero;
- structured: the matrix is often symmetric, or of symmetric pattern, and is banded.
The matrices often have special numerical properties, e.g. positive definite, diagonally dominant ...

Traditionally there are two classes of methods:
- direct methods, usually based on a factorisation of the matrix. These methods are exact up to truncation error.
- iterative methods: a sequence of approximate solutions is constructed, using simple operations like matrix-vector products.
For many problems it is not clear which is best, a direct or an iterative solver.
- Direct methods are robust, but they can need (very) large memory and may take too long for some applications.
- Iterative methods may not converge, but if only low precision is needed they can be much faster than direct methods.
- A combination of direct and iterative methods may be used: domain decomposition, iterative refinement.

Does the best iterative method exist?

In general: no. For certain classes of matrices: yes. The success of iterative methods depends on:
- the properties of A (size, sparsity structure, symmetry, conditioning, ...);
- the computer architecture (scalar, parallel, memory organisation, ...);
- the memory space available;
- the accuracy required.


Section 2
Matrix Storage

Storing a matrix seems like something simple ... it is just an array with two dimensions!
Well, in fact all that can really be stored in a computer is data at memory addresses. A system can reserve contiguous memory addresses to mimic a one-dimensional array. So how to store a 2d array in what is basically a 1d array?
What if storing all the entries of a matrix is not practicable, and/or consumes too much space?
Most matrices coming from numerical methods such as finite elements, finite differences or finite volumes are very sparse, which means that the majority of the terms are equal to zero. Why store them then?
These simple observations lead to a variety of matrix storage schemes that any numericist needs to know of: they affect the way simple operations like the matrix-vector product must be written to be efficient, how to design a solver, etc.

Outline
- Full storage: row major, column major
- Symmetric storage
- Band storage
- Skyline storage
- Coordinate storage
- Compressed row storage / compressed column storage

Full row major storage

A full matrix
\[
A = \begin{pmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,n}\\
a_{2,1} & a_{2,2} & \cdots & a_{2,n}\\
\vdots  &         &        & \vdots\\
a_{n,1} & a_{n,2} & \cdots & a_{n,n}
\end{pmatrix}
\]
can be stored in an array val in the full storage format, row by row. This is called the row major format.

k:    1        2        ...  n        n+1      ...  2n       ...  (n-1)n+1  ...  n*n
val:  a_{1,1}  a_{1,2}  ...  a_{1,n}  a_{2,1}  ...  a_{2,n}  ...  a_{n,1}   ...  a_{n,n}

Full row major storage

We thus have:
- a_{i,j} = val[(i-1)*n + j]
- val[k] = a_{(k-1)/n + 1, (k-1)%n + 1}   (integer division)

We are using here the so-called Fortran convention, where array indices start at 1. In C, indexing typically starts at 0. Be careful when using a library ... which convention is used?

Full column major storage

The matrix A can be stored in an array val in the full storage format, column by column. This is called the column major format.

k:    1        2        ...  n        n+1      ...  2n       ...  (n-1)n+1  ...  n*n
val:  a_{1,1}  a_{2,1}  ...  a_{n,1}  a_{1,2}  ...  a_{n,2}  ...  a_{1,n}   ...  a_{n,n}

- a_{i,j} = val[i + (j-1)*n]
- val[k] = a_{(k-1)%n + 1, (k-1)/n + 1}   (integer division)

The row major and column major formats are widely used, in particular for dense linear algebra. The BLAS and LAPACK libraries in particular expect the column major format. Of course these formats can also be used to store rectangular matrices; the storage space used is then m*n entries, and the libraries describe the layout through the notion of leading dimension (the stride between consecutive columns, or rows).
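To make the index arithmetic concrete, here is a minimal C sketch of both mappings, using the C convention (0-based indices) and a leading dimension for the column major case; the function names are only for illustration.

#include <stdio.h>

/* Row major: element (i,j) of an m x n matrix, 0-based indices. */
double get_row_major(const double *val, int n, int i, int j) {
    return val[i * n + j];
}

/* Column major with leading dimension lda >= m, as BLAS/LAPACK expect. */
double get_col_major(const double *val, int lda, int i, int j) {
    return val[i + j * lda];
}

int main(void) {
    /* The 2 x 3 matrix [1 2 3; 4 5 6] stored in both layouts. */
    double row_major[] = {1, 2, 3, 4, 5, 6};
    double col_major[] = {1, 4, 2, 5, 3, 6};
    /* Both calls return entry (1,2), i.e. 6. */
    printf("%g %g\n", get_row_major(row_major, 3, 1, 2),
                      get_col_major(col_major, 2, 1, 2));
    return 0;
}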

Triangular storage

Triangular or symmetric matrices are typically stored using triangular (packed) storage. It is similar to the full storage but only n(n+1)/2 terms are stored, which means roughly half of the memory is saved.
Example:
\[
A = \begin{pmatrix}
a_{1,1} & a_{1,2} & a_{1,3}\\
a_{1,2} & a_{2,2} & a_{2,3}\\
a_{1,3} & a_{2,3} & a_{3,3}
\end{pmatrix}
\]

k:    1        2        3        4        5        6
val:  a_{1,1}  a_{1,2}  a_{1,3}  a_{2,2}  a_{2,3}  a_{3,3}
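A sketch of the index computation for this row-wise packed layout (upper triangle stored row by row, as in the example above), written with 0-based C indices; the helper names are illustrative.

/* Row-wise packed storage of the upper triangle of a symmetric n x n matrix,
 * i.e. val = {a11, a12, ..., a1n, a22, ..., ann}. 0-based indices. */
static int packed_index(int n, int i, int j) {
    if (i > j) { int t = i; i = j; j = t; }  /* symmetry: A(i,j) = A(j,i) */
    return i * n - i * (i - 1) / 2 + (j - i);
}

double packed_get(const double *val, int n, int i, int j) {
    return val[packed_index(n, i, j)];
}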

A typical sparse system

Finite element analysis, 2d, number of nodes = 1128, 2208 dofs, nnz = 30118, sparsity: 0.06%.

A typical sparse system

Before renumbering:
bandwidth            2159
Full storage         4.6e6 doubles
Full storage (sym)   2.4e6 doubles
Band storage         9.5e6 doubles
Sym band storage     4.7e6 doubles

Renumbering

After renumbering using reverse Cuthill-McKee:
bandwidth            178
Full storage         4.6e6 doubles
Full storage (sym)   2.4e6 doubles
Band storage         7.8e5 doubles
Sym band storage     3.9e5 doubles

Band storage

Band storage depends on two values: the number of non-zero lower subdiagonals kl and the number of superdiagonals ku. The bandwidth is ku + kl + 1.
Example of a 5x5 matrix with ku = 1 and kl = 2:
\[
A = \begin{pmatrix}
a_{1,1} & a_{1,2} & 0       & 0       & 0\\
a_{2,1} & a_{2,2} & a_{2,3} & 0       & 0\\
a_{3,1} & a_{3,2} & a_{3,3} & a_{3,4} & 0\\
0       & a_{4,2} & a_{4,3} & a_{4,4} & a_{4,5}\\
0       & 0       & a_{5,3} & a_{5,4} & a_{5,5}
\end{pmatrix}
\]
Band storage (n = 5, ku = 1, kl = 2): each diagonal is stored in a row of length n, from the superdiagonal down to the lowest subdiagonal; unused positions are marked *.

k:    1        2        3        4        5
val:  *        a_{1,2}  a_{2,3}  a_{3,4}  a_{4,5}
k:    6        7        8        9        10
val:  a_{1,1}  a_{2,2}  a_{3,3}  a_{4,4}  a_{5,5}
k:    11       12       13       14       15
val:  a_{2,1}  a_{3,2}  a_{4,3}  a_{5,4}  *
k:    16       17       18       19       20
val:  a_{3,1}  a_{4,2}  a_{5,3}  *        *

Band storage

A matrix in band storage takes the space of (kl + ku + 1) x n entries.
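A sketch of an accessor for the layout used in the example above, where each of the kl + ku + 1 diagonals is stored in a row of length n (note that this differs from the LAPACK band layout, which packs the same diagonals column by column); 0-based indices, names illustrative.

/* Band storage with one row of length n per diagonal, from the highest
 * superdiagonal (i - j = -ku) down to the lowest subdiagonal (i - j = kl).
 * Returns 0 outside the band. 0-based indices. */
double band_get(const double *val, int n, int kl, int ku, int i, int j) {
    int d = i - j;                      /* diagonal index, between -ku and kl */
    if (d < -ku || d > kl) return 0.0;  /* entry is outside the band */
    return val[(d + ku) * n + j];
}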

Skyline storage
- Useful for skyline matrices, also called variable band or profile matrices.
- Mostly used for symmetric matrices. Variants exist for non-symmetric ones.
- The matrix elements are stored column by column, each column beginning with its first non-zero element and ending with the diagonal element. An array col_ptr stores the beginning of column j.
- Very popular for direct solvers for medium sized systems.

Skyline storage
Example:
\[
A = \begin{pmatrix}
a_{1,1} & 0       & a_{1,3} & 0       & 0\\
0       & a_{2,2} & 0       & a_{2,4} & 0\\
a_{1,3} & 0       & a_{3,3} & a_{3,4} & 0\\
0       & a_{2,4} & a_{3,4} & a_{4,4} & a_{4,5}\\
0       & 0       & 0       & a_{4,5} & a_{5,5}
\end{pmatrix}
\]

k:    1        2        3        4  5        6        7        8        9        10
val:  a_{1,1}  a_{2,2}  a_{1,3}  0  a_{3,3}  a_{2,4}  a_{3,4}  a_{4,4}  a_{4,5}  a_{5,5}

j:           1  2  3  4  5  6
col_ptr(j):  1  2  3  6  9  11

If the matrix is sparse but no good structure appears, we need a special storage scheme so that only the non-zero terms are stored.

Coordinate storage
The simplest approach to store a sparse matrix: the values of the non-zero elements are stored with their coordinates, in 3 separate arrays val, row_idx, col_idx. The order usually does not matter.
\[
A = \begin{pmatrix}
a_{1,1} & a_{1,2} & 0       & a_{1,4} & 0\\
0       & a_{2,2} & a_{2,3} & a_{2,4} & 0\\
0       & 0       & a_{3,3} & 0       & a_{3,5}\\
a_{4,1} & 0       & 0       & a_{4,4} & a_{4,5}\\
0       & a_{5,2} & 0       & 0       & a_{5,5}
\end{pmatrix}
\]

k:           1    2    3    4    5    6    7    8    9    10   11   12   13
val(k):      a41  a35  a33  a52  a11  a23  a55  a24  a12  a44  a14  a45  a22
row_idx(k):  4    3    3    5    1    2    5    2    1    4    1    4    2
col_idx(k):  1    5    3    2    1    3    5    4    2    4    4    5    2
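A minimal C sketch of the matrix-vector product y = Ax in coordinate storage (0-based indices; array and function names are illustrative):

/* y = A*x for an m-row sparse matrix in coordinate (COO) storage. */
void coo_matvec(int m, int nnz, const double *val,
                const int *row_idx, const int *col_idx,
                const double *x, double *y) {
    for (int i = 0; i < m; ++i)
        y[i] = 0.0;
    for (int k = 0; k < nnz; ++k)   /* the order of the entries does not matter */
        y[row_idx[k]] += val[k] * x[col_idx[k]];
}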

Compressed row storage
\[
A = \begin{pmatrix}
a & 0 & b & 0 & 0\\
0 & 0 & c & d & 0\\
0 & e & 0 & 0 & f\\
g & h & 0 & 0 & i\\
j & 0 & k & 0 & 0
\end{pmatrix}
\]
This matrix has 11 non-zero terms (number of non-zeros, nnz = 11).
We store the entries row by row in the array val of dimension nnz.
The array col_index of nnz integers stores the column index of the corresponding entry.

k:             1  2  3  4  5  6  7  8  9  10  11
val(k):        a  b  c  d  e  f  g  h  i  j   k
col_index(k):  1  3  3  4  2  5  1  2  5  1   3

The array row_ptr(i) is an array of integers of size n + 1 (where n is the number of rows of the matrix) that contains the index of the start of row i in the array val, for all entries but the last one, which contains nnz + 1.

i:           1  2  3  4  5   6
row_ptr(i):  1  3  5  7  10  12

How to find entry (i, j) of a matrix A in compressed row storage?
- Find the begin and end indices of row i: kb = row_ptr(i), ke = row_ptr(i+1) - 1.
- Search for j in col_index(kb : ke).
- If j is not found, A(i, j) = 0.
- If j is found at index kj in (kb, ke), A(i, j) = val(kj).
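The lookup above translates directly into C (0-based indices here); a compressed row matrix-vector product, the workhorse of iterative solvers, follows the same row-by-row pattern. Names are illustrative.

/* Return A(i,j) for a matrix in compressed row storage, 0-based indices.
 * row_ptr has n+1 entries; row i occupies val[row_ptr[i] .. row_ptr[i+1]-1]. */
double csr_get(const int *row_ptr, const int *col_index, const double *val,
               int i, int j) {
    for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
        if (col_index[k] == j)
            return val[k];              /* j found in row i */
    return 0.0;                         /* j not found: the entry is zero */
}

/* y = A*x for an n-row matrix in compressed row storage. */
void csr_matvec(int n, const int *row_ptr, const int *col_index,
                const double *val, const double *x, double *y) {
    for (int i = 0; i < n; ++i) {
        double s = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            s += val[k] * x[col_index[k]];
        y[i] = s;
    }
}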

We have just covered most of the matrix storage schemes.
Other schemes exist. In particular, block variants of the previous schemes are also used.


Section 3
Matrix Multiplication

Basic dense matrix multiplication

A in R^{m,p},  B in R^{p,n},  C in R^{m,n},  C = AB

Matrix multiplication algorithm:
for i = 1 to m do
  for j = 1 to n do
    for k = 1 to p do
      C(i,j) = A(i,k)*B(k,j) + C(i,j)
    end
  end
end

One variant, reordering the loops:
for j = 1 to n do
  for i = 1 to m do
    for k = 1 to p do
      C(i,j) = A(i,k)*B(k,j) + C(i,j)
    end
  end
end

Basic dense matrix multiplication

One can easily construct 6 variants: ijk, jik, ikj, jki, kij, kji.
Does it matter? Well, let's measure.
Pick a matrix from Matrix Market: http://math.nist.gov/MatrixMarket/
Matrix size: 1806 x 1806.
Number of floating point operations: 2 x 1806^3 = 11.8e9.
Note that the number of operations is the same whatever the algorithm.

algorithm:  ijk    jik    ikj    jki   kij    kji
time (s):   19.79  28.47  50.63  9.82  53.57  25.23

Cache effect
What changed from one version to another is the time it takes to access and modify the data.
When something has to be computed, it has to be moved from the main memory to the registers, up and down the memory hierarchy.
The running time of the loops is dominated by the memory accesses to the arrays, usually not by the operations themselves.

Cache effect
- CPUs do not access memory byte by byte. Instead they fetch memory chunks called cache lines (typically 64 bytes).
- When a memory location is read, an entire cache line is moved to the cache. That is the most time consuming operation.
- If, for the next instruction, all the data are already in the cache, nothing needs to be fetched and it goes much faster.
- In the matrix multiplication example, the matrices were stored in column major format. It means that the inner loop goes much faster if, during this loop, the next element accessed is on the next line of each matrix and therefore already in the cache.
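A small C sketch of the idea: with column major storage, the jki ordering makes the inner loop walk contiguously down a column of C and a column of A, which is why it comes out fastest in the table above (actual timings of course depend on the machine).

/* C = A*B + C with column major storage, jki loop ordering.
 * The inner loop over i accesses C(:,j) and A(:,k) with stride 1,
 * so consecutive iterations reuse the cache lines just fetched. */
void matmul_jki(int m, int n, int p,
                const double *A, const double *B, double *C) {
    for (int j = 0; j < n; ++j)
        for (int k = 0; k < p; ++k) {
            double bkj = B[k + j * p];          /* B(k,j), invariant in the inner loop */
            for (int i = 0; i < m; ++i)
                C[i + j * m] += A[i + k * m] * bkj;
        }
}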

Of course that is not the only factor. One can also look at:
- processor level parallelism, which permits performing more than one operation at once if everything is ready in the registers;
- pipelining (chaining operations like on an assembly line) at the processor level;
- on multicore processors, the operations can be parallelised at the thread level;
- ...

LESSON: even for something as simple as matrix multiplication, it can be very difficult to do it in the most efficient way. A typical good implementation will work on matrix blocks having the right size to fit in cache memory, taking advantage of pipelining and all levels of parallelism.
Computational kernels that are the most used in an application need to be fine tuned to obtain good performance. This is of course the case for basic linear algebra computations. One has to use well programmed libraries to go faster. One good example is the BLAS library.

BLAS: Basic Linear Algebra Subroutines

BLAS defines an interface for some basic linear algebra algorithms such as:
- vector copying, scaling, adding;
- vector dot products, norms;
- matrix-vector products;
- matrix-matrix products;
- triangular matrix solves.
This interface is mostly defined in Fortran but can be called from any language. The interface was first published in 1979 and has been used since as a building block in higher level math programming languages and libraries (Matlab, NumPy, LAPACK, ...).
BLAS subroutines are a standard API (Application Programming Interface).

BLAS: Basic Linear Algebra Subroutines

Many BLAS implementations exist; new versions come out all the time, specially tuned for new architectures.
- The reference and most straightforward implementation is the one from netlib (www.netlib.org/blas) (open source).
- ATLAS: Automatically Tuned Linear Algebra Software.
- OpenBLAS: optimized BLAS based on GotoBLAS.
- clBLAS: OpenCL implementation of BLAS. OpenCL is the developing standard framework to write programs across heterogeneous platforms (mixed CPU-GPU).
- cuBLAS: optimized for NVIDIA GPU cards (CUDA).
- ...

BLAS: Basic Linear Algebra Subroutines

Each processor vendor has its own BLAS implementation tuned for its processors:
- ACML: part of the AMD Core Math Library; AMD Athlon and Opteron.
- ESSL: IBM's Engineering and Scientific Subroutine Library; PowerPC.
- HP MLIB: supports IA-64, PA-RISC, x86, Opteron under HP-UX and Linux.
- Intel MKL: Intel Math Kernel Library; optimized for Intel processors.
- ...

Most linear algebra algorithms can be split into calls to BLAS subroutines.
By doing so, a programmer relies on the BLAS implementation to do the heavy lifting, and can concentrate on his own algorithm.
Changing the BLAS implementation to get faster results is then just a matter of linking to the new implementation. Nothing needs to be changed in the user code.

BLAS overview

Each routine follows the following name convention: the first letter is either s or d, for the single or double precision version of the routine.
BLAS is divided into 3 levels.

Level 1: vector operations
- xSWAP: exchange the values of 2 arrays
- xSCAL: scale one vector
- xCOPY: copy one vector
- xAXPY: add alpha*x to y
- xDOT: compute the dot product of 2 vectors
- xNRM2: compute the 2-norm of a vector
- ...

Level 2: matrix-vector operations
- xGEMV: general matrix - vector product
- xGBMV: general band matrix - vector product
- xSYMV: symmetric matrix - vector product
- xSBMV: symmetric band matrix - vector product
- xTRMV: triangular matrix - vector product
- xTBMV: triangular band matrix - vector product
- xGER: rank one update A = A + x y^T
- xSYR: symmetric rank one update A = A + x x^T
- xSYR2: symmetric rank two update A = A + x y^T + y x^T
- xTRSV: triangular matrix solve
- ...

Level 3: matrix-matrix operations
- xGEMM: general matrix - matrix product
- xSYMM: symmetric matrix - matrix product
- xSYRK: rank k update C = C + A A^T
- xSYR2K: rank 2k update C = C + A B^T + B A^T
- xTRMM: triangular matrix - matrix product
- xTRSM: triangular matrix solve with matrix right hand side

BLAS functions are very flexible while being callable from most languages. This flexibility comes at the price of the number of arguments to be passed to each subroutine.
For example AXPY, which performs the operation y = alpha*x + y, needs 6 arguments:
xAXPY(N, ALPHA, X, INCX, Y, INCY)
xGEMM, which performs the operation C = alpha*A*B + beta*C, needs 13 arguments:
xGEMM(TRANSA, TRANSB, M, N, K, ALPHA, A, LDA, B, LDB, BETA, C, LDC)
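As an illustration, a minimal sketch using the C interface to BLAS (CBLAS; assumes cblas.h and a linked BLAS implementation), with column major storage as discussed earlier:

#include <cblas.h>
#include <stdio.h>

int main(void) {
    /* y = 2*x + y with daxpy */
    double x[3] = {1, 2, 3}, y[3] = {10, 20, 30};
    cblas_daxpy(3, 2.0, x, 1, y, 1);            /* y is now {12, 24, 36} */

    /* C = 1.0*A*B + 0.0*C with dgemm: A is 2x3, B is 3x2, C is 2x2,
     * all stored column major. */
    double A[6] = {1, 4, 2, 5, 3, 6};           /* [[1 2 3],[4 5 6]] */
    double B[6] = {7, 9, 11, 8, 10, 12};        /* [[7 8],[9 10],[11 12]] */
    double C[4] = {0, 0, 0, 0};
    cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 3, 1.0, A, 2, B, 3, 0.0, C, 2);
    printf("%g %g %g %g\n", C[0], C[1], C[2], C[3]);  /* 58 139 64 154 */
    return 0;
}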

Back to measurements

Same matrix as in the first example. Computation time for the product:
- best hand made loop: 9.81 s
- using netlib BLAS: 4.56 s
- using OpenBLAS: 1.4 s

BLAS benchmark

Lesson: whenever possible, formulate algorithms in terms of Level 3 BLAS operations.

Lots of algorithms can be recast as matrix multiplications. For example, the integration of a bilinear form over an element:
\[
M_{ij} = \int_{\Omega_e} N_i(x)\,N_j(x)\,d\Omega
\approx \sum_{k=1}^{n_{gauss}} w_k\, N_i(x_k)\, N_j(x_k)\, J(x_k)
\]

First compute all the shape functions at each Gauss point and put the result in the matrix N.
Note that this matrix is the same for all elements of the same type when using isoparametric elements and the same integration rule for all elements. Therefore it can be computed once and for all at the beginning of the program.
\[
N = \begin{pmatrix}
N_1(x_1) & \cdots & N_n(x_1)\\
N_1(x_2) & \cdots & N_n(x_2)\\
\vdots   &        & \vdots\\
N_1(x_{n_{gauss}}) & \cdots & N_n(x_{n_{gauss}})
\end{pmatrix}
\]

For a given element, compute the weighted Jacobian w_k J_k = w_k J(x_k) at each Gauss point.
The mass matrix can now be computed as:
\[
M = N^T \,\mathrm{diag}(w_1 J_1, \dots, w_{n_{gauss}} J_{n_{gauss}})\, N
\]
Using matrix algebra, and therefore a BLAS implementation, will in general be much faster than the direct loop.
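A possible sketch of this computation with CBLAS (array names and layout are assumptions, not part of the slides): scale row k of N by w_k J_k into a scratch copy, then one dgemm call with the first operand transposed gives M = N^T diag(wJ) N.

#include <cblas.h>
#include <string.h>

/* M = N^T diag(wJ) N. N is ngauss x n, column major, leading dimension ngauss.
 * wJ[k] = w_k * J(x_k). WN is an ngauss x n scratch array, M is n x n. */
void element_mass_matrix(int ngauss, int n, const double *N,
                         const double *wJ, double *WN, double *M) {
    memcpy(WN, N, sizeof(double) * (size_t)ngauss * (size_t)n);
    for (int k = 0; k < ngauss; ++k)            /* scale row k of WN by wJ[k]; */
        cblas_dscal(n, wJ[k], WN + k, ngauss);  /* stride ngauss walks a row   */
    /* M = 1.0 * N^T * WN + 0.0 * M, an n x n level-3 BLAS call */
    cblas_dgemm(CblasColMajor, CblasTrans, CblasNoTrans,
                n, n, ngauss, 1.0, N, ngauss, WN, ngauss, 0.0, M, n);
}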

Sparse BLAS??
- Unfortunately, this does not exist (yet?).
- Some implementations of BLAS do offer some support.
- Efficient libraries for sparse linear algebra use complex algorithms to pack multiple operations into BLAS calls.


Section 4
Basic Linear Algebra and Matrix properties

Space, subspace, span
- A set of n vectors {a_1, ..., a_n} in R^m is linearly independent if \sum_{j=1}^{n} \alpha_j a_j = 0 implies \alpha_j = 0 for all j. Otherwise, the set is linearly dependent.
- A subspace of R^m is a subset that is also a vector space.
- Given a collection of vectors {a_1, ..., a_n} in R^m, the set of all linear combinations of these vectors is a subspace called the span of {a_1, ..., a_n}:
\[
\mathrm{span}\{a_1,\dots,a_n\} = \Big\{ \sum_{j=1}^{n} \alpha_j a_j \;:\; \alpha_j \in \mathbb{R} \Big\}
\]

Range, null space

There are two important subspaces associated to any m x n matrix A:
- the range: ran(A) = { y in R^m : y = Ax for some x in R^n }
- the null space of A, defined by: null(A) = { x in R^n : Ax = 0 }

Range and rank
- Let the a_j be the columns of A (A = [a_1, ..., a_n]). Then we have ran(A) = span{a_1, ..., a_n}.
- The rank of a matrix A is defined by rank(A) = dim(ran(A)).
- It can be shown that rank(A) = rank(A^T).
- A matrix A in R^{m,n} is rank deficient if rank(A) < min(m, n).
- If A is in R^{m,n}, then dim(null(A)) + rank(A) = n.

Matrix transpose

Given a matrix A in R^{m,n}, its transpose, noted A^T, is the matrix in R^{n,m} such that
\[
y^T A x = x^T A^T y \qquad \forall x \in \mathbb{R}^n,\ \forall y \in \mathbb{R}^m
\]
and we have A_{ij} = (A^T)_{ji}.

Symmetric matrix

The following propositions are equivalent:
- a matrix S is symmetric if (Sx)^T = x^T S for all x;
- a matrix S is symmetric if S^T = S;
- a matrix S is symmetric if s_{ij} = s_{ji}.

Matrix conjugate transpose

A matrix A^H is the conjugate transpose of A if (A^H)_{ij} = \bar{a}_{ji}.
Hermitian (self-adjoint) matrix: A^H = A.

Determinant

A recursive definition of the determinant of an n x n matrix:
- if A is in R^{1,1}, det(A) = a_{11};
- else
\[
\det(A) = \sum_{j=1}^{n} (-1)^{j+1}\, a_{1,j}\, \det(A_{1,j})
\]
where A_{1,j} in R^{n-1,n-1} is obtained by removing the first row and the j-th column from A:
A_{1,j} = A((2, ..., n), (1, ..., j-1, j+1, ..., n)).
A is singular if det(A) = 0, otherwise A is regular.

Determinant

Properties of the determinant:
- det(AB) = det(BA)
- det(A^T) = det(A)
- det(alpha A) = alpha^n det(A)
- det(I) = 1
Characteristic polynomial: p_A(lambda) = det(A - lambda I).

Identity matrix

The following propositions are equivalent:
- the matrix I is the identity matrix if Ix = x for all x;
- the matrix I is the identity matrix if I_{ij} = delta_{ij},
where delta_{ij} is the Kronecker symbol:
delta_{ii} = 1,  delta_{ij} = 0 if i != j.

Inverse matrix

If the inverse of A in R^{n,n} exists, the inverse, noted A^{-1} in R^{n,n}, is the matrix such that A A^{-1} = I.
det(A) != 0 is a necessary and sufficient condition for A^{-1} to exist.

Positive definite matrix

A matrix A is positive definite if x^T A x > 0 for all x != 0.
Symmetric positive definite matrices are of special interest in computational mechanics:
- factorisations for solving can be simplified;
- special iterative solvers are available.

Orthogonal, unitary matrix

A matrix P is orthogonal if P^{-1} = P^T.
A matrix Q is unitary if Q^{-1} = Q^H.

Diagonal matrix

A matrix D is diagonal if only the diagonal terms are non-zero.
D = diag(a_1, ..., a_n) is the diagonal matrix in R^{n,n} such that d_{ii} = a_i.
Inverse of a diagonal matrix:
(D^{-1})_{ii} = 1 / d_{ii},   (D^{-1})_{ij} = 0 if i != j.
The product of diagonal matrices is a diagonal matrix:
diag(a_1, ..., a_n) diag(b_1, ..., b_n) = diag(a_1 b_1, ..., a_n b_n).

Triangular matrix

Lower triangular matrix: L_{ij} = 0 if i < j.
Upper triangular matrix: U_{ij} = 0 if i > j.

Properties of L and U matrices

L and U matrices have the following properties:
- the product of two lower triangular matrices is a lower triangular matrix;
- the product of two upper triangular matrices is an upper triangular matrix;
- the inverse of a lower triangular matrix is a lower triangular matrix;
- the inverse of an upper triangular matrix is an upper triangular matrix.

Properties of L and U matrices

The determinant of a triangular matrix is the product of its diagonal terms:
\[
\det L = \prod_{k=1}^{m} L_{kk}, \qquad \det U = \prod_{k=1}^{m} U_{kk}
\]

Singular value decomposition

If A is a real m x n matrix, then there exist orthogonal matrices
U = [u_1, ..., u_m] in R^{m,m} and V = [v_1, ..., v_n] in R^{n,n}
such that
\[
U^T A V = \mathrm{diag}(\sigma_1, \dots, \sigma_p), \qquad p = \min\{m, n\}
\]
where sigma_1 >= sigma_2 >= ... >= sigma_p >= 0.
- The sigma_i are called the singular values.
- The u_i are the left singular vectors.
- The v_i are the right singular vectors.

Singular value decomposition

Let us define r by:
sigma_1 >= ... >= sigma_r > sigma_{r+1} = ... = sigma_p = 0
r is the index of the last non-zero singular value. We then have:
- rank(A) = r
- null(A) = span{v_{r+1}, ..., v_n}
- ran(A) = span{u_1, ..., u_r}
This allows us to define quasi- or numerically singular matrices: if a singular value is less than a fixed value epsilon, the matrix is said to be numerically singular.

Vector norm

A vector norm is an application f from R^n to R that satisfies the following properties:
- f(x) >= 0 for all x in R^n
- f(x) = 0 if and only if x = 0
- f(x + y) <= f(x) + f(y) for all x, y in R^n
- f(alpha x) = |alpha| f(x) for all alpha in R, x in R^n

Vector norm

Notation: a norm is noted ||x||. The p-norm is defined by:
\[
\|x\|_p = \Big( \sum_{i=1}^{n} |x_i|^p \Big)^{1/p}, \qquad p \ge 1
\]
1, 2 and infinity norms:
\[
\|x\|_1 = \sum_{i=1}^{n} |x_i|, \qquad
\|x\|_2 = \Big( \sum_{i=1}^{n} x_i^2 \Big)^{1/2} = \sqrt{x^T x} \ \ \text{(Euclidean norm)}, \qquad
\|x\|_\infty = \max_{1 \le i \le n} |x_i|
\]

Vector norm and error

Definition of the error. Let x_hat be an approximation of x.
\[
\epsilon_{abs} = \|\hat{x} - x\|, \qquad
\epsilon_{rel} = \frac{\|\hat{x} - x\|}{\|x\|}
\]
are the absolute and relative errors between x_hat and x, relative to the norm ||.||.
The infinity norm can be related to the number of significant digits. For example,
\[
\frac{\|\hat{x} - x\|_\infty}{\|x\|_\infty} \approx 10^{-p}
\]
means that there are approximately p correct significant digits in x_hat.

Matrix norm

A matrix norm is an application from R^{m,n} to R with the following properties (the same as a vector norm):
- f(A) >= 0 for all A in R^{m,n}
- f(A) = 0 if and only if A = 0
- f(A + B) <= f(A) + f(B) for all A, B in R^{m,n}
- f(alpha A) = |alpha| f(A) for all alpha in R, A in R^{m,n}

Matrix norm

Frobenius norm (generalisation of the Euclidean norm):
\[
\|A\|_F = \sqrt{ \sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2 }
\]
p-norm:
\[
\|A\|_p = \sup_{x \neq 0} \frac{\|Ax\|_p}{\|x\|_p}
\]

Matrix norm

Submultiplicative property of the p-norm:
\[
\|AB\|_p \le \|A\|_p \|B\|_p
\]
Note that this is not the case for all norms! For instance, take ||A|| = max|a_{ij}| and
\[
A = B = \begin{pmatrix} 1 & 1\\ 1 & 1 \end{pmatrix}.
\]
We then have ||AB|| > ||A|| ||B|| (indeed all entries of AB equal 2, so ||AB|| = 2 > 1 = ||A|| ||B||).
Important property of the p-norm:
\[
\|Ax\|_p \le \|A\|_p \|x\|_p
\]

Some matrix norm properties
\[
\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^{m} |a_{ij}|, \qquad
\|A\|_\infty = \max_{1 \le i \le m} \sum_{j=1}^{n} |a_{ij}|
\]
- ||A||_1 is the maximum of the sums over each column.
- ||A||_inf is the maximum of the sums over each row.
- The 1 and infinity norms are easily computed.
- ||A(i1:i2, j1:j2)||_p <= ||A||_p: the p-norm of a submatrix is less than or equal to the p-norm of the matrix.

The 2-norm is not as easy to compute. It can be shown that the 2-norm of A is its largest singular value sigma_1 (equal to the square root of the largest eigenvalue of A^T A). In most cases approximations can be used, since we have, for instance:
\[
\frac{1}{\sqrt{n}} \|A\|_\infty \le \|A\|_2 \le \sqrt{m}\, \|A\|_\infty,
\qquad
\frac{1}{\sqrt{m}} \|A\|_1 \le \|A\|_2 \le \sqrt{n}\, \|A\|_1
\]

Matrix norms and singular values
\[
\|A\|_F^2 = \sigma_1^2 + \dots + \sigma_p^2, \qquad p = \min\{m, n\}
\]
\[
\|A\|_2 = \sigma_1, \qquad
\min_{x \neq 0} \frac{\|Ax\|_2}{\|x\|_2} = \sigma_n
\]

Condition number: sensitivity of a linear problem

Let x be the solution of Ax = b. Let us perturb the system by epsilon*F and epsilon*f, where epsilon is small.
Let x(epsilon) be the solution of the perturbed system; we want to quantify the variation of the solution (x(0) = x). We have
\[
(A + \varepsilon F)\, x(\varepsilon) = b + \varepsilon f
\]
Let us differentiate with respect to epsilon:
\[
F x(\varepsilon) + (A + \varepsilon F)\, \dot{x}(\varepsilon) = f
\quad\Rightarrow\quad
\dot{x}(0) = A^{-1}(f - Fx)
\]

Condition number: sensitivity of a linear problem

Let us develop a Taylor series for x:
\[
x(\varepsilon) = x + \varepsilon\, \dot{x}(0) + O(\varepsilon^2)
\]
Now we can bound the effect of the perturbation on x:
\[
\frac{\|x(\varepsilon) - x\|}{\|x\|}
\le |\varepsilon| \frac{\|\dot{x}(0)\|}{\|x\|} + O(\varepsilon^2)
\le |\varepsilon|\, \|A^{-1}\| \Big( \frac{\|f\|}{\|x\|} + \|F\| \Big) + O(\varepsilon^2)
\]
We have Ax = b, therefore
\[
\frac{1}{\|x\|} \le \frac{\|A\|}{\|b\|}
\]
and:
\[
\frac{\|x(\varepsilon) - x\|}{\|x\|}
\le \|A\|\,\|A^{-1}\| \Big( |\varepsilon| \frac{\|F\|}{\|A\|} + |\varepsilon| \frac{\|f\|}{\|b\|} \Big) + O(\varepsilon^2)
\]

Condition number: sensitivity of a linear problem
- kappa(A) = ||A|| ||A^{-1}|| is called the condition number of A relative to the matrix norm ||.||.
- The previous equation shows that the relative error in x (solution of the linear system Ax = b) can be as big as kappa(A) times the relative error on A plus kappa(A) times the relative error on b.
- The condition number quantifies the sensitivity of the Ax = b problem.

In the 2-norm, we have:
\[
\kappa_2(A) = \|A\|_2\, \|A^{-1}\|_2 = \frac{\sigma_1}{\sigma_n}
\]
In the p-norm, we have:
\[
\frac{1}{\kappa_p(A)} = \min_{A + \Delta A\ \mathrm{singular}} \frac{\|\Delta A\|_p}{\|A\|_p}
\]
The reciprocal of kappa_p(A) is therefore the minimum relative distance between A and a singular matrix. The higher kappa_p(A), the closer A is to a singular matrix.
When kappa(A) is large, A is said to be ill-conditioned. If kappa(A) is small, A is well-conditioned.
In the p-norm, the smallest possible value of kappa is 1.
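A tiny worked example of the 2-norm condition number: for a diagonal matrix the singular values are the absolute values of the diagonal entries, so
\[
A = \begin{pmatrix} 1 & 0\\ 0 & 10^{-6} \end{pmatrix}
\quad\Rightarrow\quad
\kappa_2(A) = \frac{\sigma_1}{\sigma_2} = \frac{1}{10^{-6}} = 10^{6},
\]
and, by the bound above, a relative perturbation of b of order 10^{-8} can be amplified into a relative error of order 10^{-2} in x.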

Eigenvalue decomposition

Eigenvalues are the roots of the characteristic polynomial p(z) = det(zI - A).
The set of roots lambda(A) = {lambda_1, ..., lambda_n} of the characteristic polynomial is the spectrum of A.
We have
\[
\det A = \lambda_1 \lambda_2 \cdots \lambda_n, \qquad
\mathrm{tr}(A) = \sum_{i=1}^{n} a_{ii} = \sum_{i=1}^{n} \lambda_i
\]
The non-zero vectors z that satisfy Az = lambda z are the eigenvectors. An eigenvector defines a one-dimensional subspace that is invariant with respect to pre-multiplication by A.

Eigenvalue decomposition

Schur decomposition: if A is in C^{n,n}, there exists a unitary Q in C^{n,n} such that
\[
Q^H A Q = T = D + N
\]
where D = diag(lambda_1, ..., lambda_n) and N in C^{n,n} is strictly upper triangular.
If A in R^{n,n} is diagonalisable, then there exists X such that
\[
X^{-1} A X = \mathrm{diag}(\lambda_1, \dots, \lambda_n)
\]

Spectrum and spectral radius

The set of all eigenvalues of A is the spectrum of A; we note it lambda(A).
The spectral radius rho(A) is defined as:
\[
\rho(A) = \max_{\lambda \in \lambda(A)} |\lambda|
\]

Link between singular values and eigenvalues

The eigenvalues of the symmetric matrix A^T A are the squares of the singular values of A.
- A symmetric matrix has real eigenvalues.
- A positive definite matrix has positive eigenvalues.
- A^T A is of course symmetric.
- A^T A is positive semi-definite (positive definite if A has full column rank), since x^T A^T A x = (Ax)^T (Ax), which is the square of the Euclidean norm of Ax.
Therefore A^T A has non-negative real eigenvalues.
As any matrix, A has a singular value decomposition A = U diag(sigma_1, ..., sigma_n) V^T, and we have
\[
A^T A = V \,\mathrm{diag}(\sigma_1, \dots, \sigma_n)\, U^T U \,\mathrm{diag}(\sigma_1, \dots, \sigma_n)\, V^T
= V \,\mathrm{diag}(\sigma_1^2, \dots, \sigma_n^2)\, V^T
\]

Hopefully we (quickly) covered most of the matrix algebra that will be needed in the rest of the class.