Outline
Motivation
Hierarchical matrices
Random Field
Model unknowns as a Gaussian random field
E[s] = X\beta
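As a concrete illustration of this model, here is a minimal Python sketch (not from the slides) that draws a realization of s ~ N(Xβ, Q) on a small 1-D grid; the grid, the exponential covariance, its length scale, and the drift functions are all illustrative assumptions.

```python
import numpy as np

# Illustrative 1-D setup: grid, exponential covariance, and drift are assumptions
rng = np.random.default_rng(0)
N = 200
x = np.linspace(0.0, 1.0, N)                        # grid points in [0, 1]
Q = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)  # exponential covariance, length scale 0.2
X = np.column_stack([np.ones(N), x])                # drift functions: constant + linear trend
beta = np.array([1.0, 0.5])                         # drift coefficients

# Draw s ~ N(X beta, Q) via a Cholesky factor (small jitter for numerical stability)
L = np.linalg.cholesky(Q + 1e-10 * np.eye(N))
s = X @ beta + L @ rng.standard_normal(N)
print(s[:5])
```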
Bayesian viewpoint
y = h(s) + v, \qquad v \sim N(0, R)

where,
  y    := vector of measurements
  s    := vector of unknowns (the random field above)
  h(s) := measurement operator; h(s) = Hs in the linear case
  v    := measurement noise

p(s, \beta \mid y) \;\propto\; p(y \mid s, \beta)\, p(s, \beta)
              \;\propto\; \exp\!\left( -\tfrac{1}{2}\,\|s - X\beta\|^2_{Q^{-1}} \;-\; \tfrac{1}{2}\,\|y - Hs\|^2_{R^{-1}} \right)
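For a linear measurement operator h(s) = Hs, the negative log of this posterior is, up to a constant, the sum of the two quadratic terms above. A small sketch of evaluating that objective with dense stand-in matrices (sizes and matrices are illustrative, not the slides' setup):

```python
import numpy as np

def neg_log_posterior(s, beta, y, X, H, Q, R):
    """0.5*||s - X beta||^2 in the Q^{-1} norm + 0.5*||y - H s||^2 in the R^{-1} norm."""
    prior_res = s - X @ beta
    data_res = y - H @ s
    return 0.5 * prior_res @ np.linalg.solve(Q, prior_res) \
         + 0.5 * data_res @ np.linalg.solve(R, data_res)

# Tiny example with dense stand-ins
rng = np.random.default_rng(1)
m, n, p = 50, 10, 2
xg = np.linspace(0.0, 1.0, m)
Q = np.exp(-np.abs(xg[:, None] - xg[None, :]) / 0.2)
R = 0.1 * np.eye(n)
X = np.column_stack([np.ones(m), xg])
H = rng.standard_normal((n, m)) / np.sqrt(m)
s, beta = rng.standard_normal(m), rng.standard_normal(p)
y = H @ s + 0.1 * rng.standard_normal(n)
print(neg_log_posterior(s, beta, y, X, H, Q, R))
```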
Best Estimate
\begin{bmatrix} HQH^T + R & HX \\ (HX)^T & 0 \end{bmatrix}
\begin{bmatrix} \xi \\ \beta \end{bmatrix}
=
\begin{bmatrix} y \\ 0 \end{bmatrix},
\qquad
\hat{s} = X\hat{\beta} + QH^T\hat{\xi}

Operation costs (in terms of):
  n : number of measurements
  m : number of unknowns
  p : number of drift functions (columns of X)
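A small dense sketch of assembling and solving the saddle-point system above and recovering ŝ = Xβ̂ + QHᵀξ̂. The matrices below are illustrative stand-ins; at the problem sizes targeted in these slides the system is never formed explicitly.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, p = 100, 20, 2                      # unknowns, measurements, drift functions

# Small dense stand-ins for Q, H, R, X (illustrative only)
xg = np.linspace(0.0, 1.0, m)
Q = np.exp(-np.abs(xg[:, None] - xg[None, :]) / 0.2)
H = rng.standard_normal((n, m)) / np.sqrt(m)
R = 0.01 * np.eye(n)
X = np.column_stack([np.ones(m), xg])
y = H @ (X @ np.array([1.0, 0.5])) + 0.1 * rng.standard_normal(n)

# Saddle-point system: [[H Q H^T + R, H X], [(H X)^T, 0]] [xi; beta] = [y; 0]
HX = H @ X
A = np.block([[H @ Q @ H.T + R, HX],
              [HX.T, np.zeros((p, p))]])
rhs = np.concatenate([y, np.zeros(p)])
sol = np.linalg.solve(A, rhs)
xi, beta = sol[:n], sol[n:]

# Best estimate
s_hat = X @ beta + Q @ (H.T @ xi)
print(beta, np.linalg.norm(s_hat))
```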
Matrix-vector products with a kernel matrix require evaluating sums of the form

\sum_{j=1}^{N} \kappa(x_i, x_j)\, v_j, \qquad i = 1, \dots, N
Consider \kappa : [0, 1]^2 \to \mathbb{R}, evaluated at x_i, y_i = (i - 1)\tfrac{1}{N}, \; i = 1, \dots, N, with

\kappa(x, y) = \exp(-|x - y|)
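The sketch below builds the dense kernel matrix for this example and compares the numerical rank of the full matrix with that of a well-separated off-diagonal block; the value of N, the tolerance, and the choice of block are illustrative assumptions.

```python
import numpy as np

N = 512
x = np.arange(N) / N                             # x_i = (i-1)/N with 0-based indexing
K = np.exp(-np.abs(x[:, None] - x[None, :]))     # kappa(x, y) = exp(-|x - y|)

def numrank(A, tol=1e-6):
    """Numerical rank: number of singular values above tol relative to the largest."""
    sv = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(sv > tol * sv[0]))

# Full matrix vs. a well-separated (admissible) off-diagonal block
block = K[: N // 4, 3 * N // 4 :]                # points in [0, 1/4) vs [3/4, 1)
print("rank of full matrix :", numrank(K))
print("rank of far block   :", numrank(block))
```

The well-separated block has very low numerical rank even though the full matrix does not, which is the observation the hierarchical low-rank format exploits.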
H-matrix formulation
Key features of Hierarchical matrices:
  A hierarchical separation of space.
  A specified, acceptable tolerance for the block-wise approximation error.
  Low-rank approximation of admissible sub-blocks.
  Valid for asymptotically smooth kernels.

Definition
A cluster pair (\tau, \sigma) is considered admissible if
  \min\{\mathrm{diam}(X_\tau), \mathrm{diam}(X_\sigma)\} \le \eta\, \mathrm{dist}(X_\tau, X_\sigma)

Definition
A kernel is called asymptotically smooth if there exist constants c_{1,as}, c_{2,as} and a real number g \ge 0 such that, for all multi-indices \alpha \in \mathbb{N}_0^d with p = |\alpha|, it holds that
  |\partial_y^{\alpha} K(x, y)| \le c_{1,as}\, p!\, (c_{2,as})^p\, |x - y|^{-g - p}
Arvind Saibaba Peter K. Kitanidis
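A minimal sketch of the admissibility test above for two point clusters; the helper names, the parameter eta, and the brute-force diameter and distance computations are illustrative choices, not the slides' implementation.

```python
import numpy as np

def diam(X):
    """Diameter of a point cluster (max pairwise distance; fine for small clusters)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return d.max()

def dist(X, Y):
    """Minimum distance between two point clusters."""
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return d.min()

def admissible(X, Y, eta=1.0):
    """min{diam(X), diam(Y)} <= eta * dist(X, Y)"""
    return min(diam(X), diam(Y)) <= eta * dist(X, Y)

# Example: clusters in [0, 0.25]^2 and [0.75, 1]^2 are admissible;
# a cluster paired with itself (distance 0) is not.
rng = np.random.default_rng(3)
A = 0.25 * rng.random((50, 2))
B = 0.75 + 0.25 * rng.random((50, 2))
print(admissible(A, B), admissible(A, A))
```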
H-matrix
Figure: left: a typical H-matrix rank structure; right: time (sec) for the matrix-vector product (HierMatrix vs. Direct) for the exponential covariance function, tolerance 10^{-6}.
Iterative solver
Krylov subspace methods for solving Ax = b at the i-th iteration satisfy

  r_i \in \mathrm{span}\{r_0, Ar_0, A^2 r_0, \dots, A^{i-1} r_0\} = \{\phi(A)\, r_0 : \phi \in \mathcal{P}_i\}

where r_i = b - A x_i is the residual at the i-th iteration.

Minimal residual methods such as MINRES or GMRES compute a polynomial such that

  \|r_i\| = \min_{\phi \in \mathcal{P}_i} \|\phi(A)\, r_0\|

Here A is the saddle-point matrix
  \begin{bmatrix} HQH^T + R & HX \\ (HX)^T & 0 \end{bmatrix}

A is not constructed explicitly; we rely only on matrix-vector products. For example,

  (HQH^T + R)\, x = H\,(Q\,(H^T x)) + R\, x

where the product with Q is the fast H-matrix matrix-vector product.
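A sketch of the matrix-free idea using SciPy: the saddle-point operator is wrapped in a LinearOperator whose matvec applies H, Q, R, and X factor by factor, and is handed to GMRES. The dense Q below is a small stand-in for the H-matrix matvec, and all sizes are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(4)
m, n, p = 200, 30, 2
xg = np.linspace(0.0, 1.0, m)
Q = np.exp(-np.abs(xg[:, None] - xg[None, :]) / 0.2)   # stand-in for the H-matrix
H = rng.standard_normal((n, m)) / np.sqrt(m)
R = 0.01 * np.eye(n)
X = np.column_stack([np.ones(m), xg])
y = H @ (X @ np.array([1.0, 0.5])) + 0.1 * rng.standard_normal(n)

def matvec(z):
    # z = [xi; beta]; apply [[H Q H^T + R, H X], [(H X)^T, 0]] using only matvecs
    xi, beta = z[:n], z[n:]
    top = H @ (Q @ (H.T @ xi)) + R @ xi + H @ (X @ beta)
    bot = X.T @ (H.T @ xi)
    return np.concatenate([top, bot])

A = LinearOperator((n + p, n + p), matvec=matvec)
rhs = np.concatenate([y, np.zeros(p)])
sol, info = gmres(A, rhs)                 # info == 0 signals convergence
xi, beta = sol[:n], sol[n:]
s_hat = X @ beta + Q @ (H.T @ xi)
print(info, beta)
```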
Kernel                 Approximation error
piecewise H^r          \epsilon_m \le c_1\, m^{-r/d}
piecewise smooth       \epsilon_m \le c_2\, m^{-r}  for any r > 0
piecewise analytic     \epsilon_m \le c_3 \exp(-c_4\, m^{1/d})
Preconditioner

Q \approx V_r \Lambda_r V_r^T

M = R^{-1/2}\, H\, V_r\, \Lambda_r^{1/2}

M = U \Sigma V^T

(M M^T + I)^{-1} = I - U D_r U^T

\Gamma^{-1} = R^{-1} - R^{-1/2}\, U D_r U^T\, R^{-1/2} \;\approx\; (H Q H^T + R)^{-1}
Operation costs:
  rank-r approximation of Q (via H-matrix matvecs): O(rkm \log m)
  forming M and its SVD: O(nr^2)
  applying the preconditioner: O(nr)
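A small dense sketch of this construction: take the leading r eigenpairs of Q, form M = R^{-1/2} H V_r Λ_r^{1/2}, and use its SVD with the Woodbury identity to apply the approximate inverse. The symbol Γ, the explicit formula D_r = diag(σ_i²/(1 + σ_i²)), and all problem sizes are assumptions spelled out here for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, r = 400, 60, 20
xg = np.linspace(0.0, 1.0, m)
Q = np.exp(-np.abs(xg[:, None] - xg[None, :]) / 0.2)
H = rng.standard_normal((n, m)) / np.sqrt(m)
R = 0.01 * np.eye(n)

# Rank-r eigendecomposition of Q (at scale this would use H-matrix matvecs
# with an iterative or randomized eigensolver instead of a dense eigh)
w, V = np.linalg.eigh(Q)
Vr, lam_r = V[:, -r:], w[-r:]

# M = R^{-1/2} H V_r Lambda_r^{1/2}, then its thin SVD M = U S W^T
R_isqrt = np.diag(1.0 / np.sqrt(np.diag(R)))              # R is diagonal here
M = R_isqrt @ H @ Vr @ np.diag(np.sqrt(lam_r))
U, S, _ = np.linalg.svd(M, full_matrices=False)

# Woodbury: (M M^T + I)^{-1} = I - U D_r U^T with D_r = diag(s_i^2 / (1 + s_i^2))
Dr = np.diag(S**2 / (1.0 + S**2))
Gamma_inv = np.linalg.inv(R) - R_isqrt @ U @ Dr @ U.T @ R_isqrt  # approx (H Q H^T + R)^{-1}

# Eigenvalues of the preconditioned matrix cluster around 1
Psi = H @ Q @ H.T + R
evals = np.linalg.eigvals(Gamma_inv @ Psi)
print(np.sort(np.abs(evals - 1.0))[-5:])    # largest deviations |lambda - 1|
```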
Theorem (Bauer-Fike)
Let A be a diagonalizable matrix and V the non-singular eigenvector matrix such that A = V \Lambda V^{-1}. If \mu is an eigenvalue of A + \Delta A, then an eigenvalue \lambda \in \lambda(A) exists such that

  |\lambda - \mu| \le \mathrm{cond}(V)\, \|\Delta A\|
Applying this result to the preconditioned matrix A = \Gamma^{-1}(H Q H^T + R), we have

  |1 - \lambda(\Gamma^{-1}(H Q H^T + R))| \le \epsilon_r\, \|Q\|\, \|H\|^2\, \|\Gamma^{-1}\|

which gives us an explicit bound on the spectral radius of I - \Gamma^{-1}(H Q H^T + R).
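A quick numerical sanity check of the Bauer-Fike statement itself on a random diagonalizable matrix (the size and perturbation level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50
A = rng.standard_normal((n, n))
dA = 1e-6 * rng.standard_normal((n, n))

lam, V = np.linalg.eig(A)
mu = np.linalg.eig(A + dA)[0]

# For every eigenvalue mu of A + dA, some eigenvalue lam of A satisfies
# |lam - mu| <= cond(V) * ||dA||_2
bound = np.linalg.cond(V) * np.linalg.norm(dA, 2)
worst = max(np.min(np.abs(lam - m)) for m in mu)
print(worst, "<=", bound, worst <= bound)
```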
Forward problem
\frac{\partial u}{\partial t} = -\,v \cdot \nabla u + D \nabla^2 u \quad \text{in } \mathcal{D} \times [0, T]
u = 0 \quad \text{on } \partial\mathcal{D} \times [0, T]
u = u_0 \quad \text{on } \mathcal{D} \times \{t = 0\}

Measurement Operator

h(s) = Hs = \underbrace{H}_{\text{Sensors}}\; \underbrace{A^{-1}}_{\text{Forward Propagation}}\; \underbrace{T}_{\text{Prolongation}}\, s
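A minimal sketch of composing the measurement operator as sampling after forward propagation for a 1-D periodic advection-diffusion model, applied matrix-free. The discretization, time stepping, sensor spacing, and the identity prolongation are all illustrative assumptions, not the slides' setup.

```python
import numpy as np

# Illustrative 1-D periodic advection-diffusion discretization (not the slides' setup)
m, nt = 200, 10                      # grid points, number of measurement times
dx, dt = 1.0 / 200, 0.05 / 10        # dt small enough for stable explicit stepping
v, D = 0.5, 1e-3                     # advection speed, diffusion coefficient

def step(u):
    """One explicit upwind-advection / central-diffusion time step (periodic)."""
    adv = -v * (u - np.roll(u, 1)) / dx
    dif = D * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u + dt * (adv + dif)

sensors = np.arange(0, m, 8)         # sample every 8th grid point

def H_apply(s):
    """h(s) = Hs: prolong s to the initial condition, propagate, sample at sensors."""
    u = s.copy()                     # prolongation T is the identity in this sketch
    out = []
    for _ in range(nt):
        for _ in range(10):          # several fine steps between measurement times
            u = step(u)
        out.append(u[sensors])       # sensor sampling
    return np.concatenate(out)

s0 = np.exp(-200 * (np.linspace(0, 1, m) - 0.3) ** 2)   # a source to propagate
print(H_apply(s0).shape)             # (nt * number of sensors,)
```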
Reconstruction
Unknowns     Iterations    Rel. Err
100 × 100     30           0.0941
200 × 200     32           0.0949
300 × 300     33           0.0953

100 × 100     38           0.0669
200 × 200     39           0.0675
300 × 300     41           0.0679

100 × 100    136           0.0495
200 × 200    196           0.0503
300 × 300    200           0.0500

Table: The performance of the iterative scheme for the contaminant source identification problem. In each case the number of time measurements was n_t = 10, with L = T = 1 and \Delta t = 0.05. For the preconditioner, we used r = 100.
Iterations    \epsilon_r          Rel. err.
300 (·)       2.45 × 10^{-5}      0.0148
196           4.76 × 10^{-6}      0.0146
 96           1.75 × 10^{-6}      0.0145
 41           7.09 × 10^{-7}      0.0144

Table: Performance of the iterative scheme with increasing r for grid size 100 × 100 and number of sensors 25 × 25, so that the number of measurements is 6,250. "·" indicates that the maximum number of iterations (300) was reached without converging to the desired solver tolerance.
Figure: number of eigenvalues with |\lambda - 1| > x versus x = |\lambda - 1|, for m = 400, 1600, and 10000 (eigenvalue clustering of the preconditioned matrix).
Conclusions
Our contributions
A scalable matrix-free approach to solving linear inverse problems.
A preconditioner that is expensive to compute but performs well.
Quantifying uncertainty - generating conditional realizations.
Unconditional realizations using Chebyshev matrix polynomials.