
Inverse Theory Applications in Petrophysics
K.C. Hari Kumar
ONGC, Baroda.

2013

Presentation Overview

Inverse Theory Application

SVD and Anatomy of Inverse Problem

Regularization Methods

Interpretation

Issues and Challenges


Data Inversion

Role of numerical methods in Petrophysics

The down-hole measurements (N in number) make up the data used for volumetric parameter estimation:

    data, d = [d1, d2, …, dN]ᵀ

The earth model, in terms of mineral volumes, is to be retrieved:

    model parameters, m = [m1, m2, …, mM]ᵀ

The physical theory or quantitative model that predicts the data is the forward problem.
Inverse theory aims at estimating an earth model from the data.


Forward Theory

[Diagram] model estimates (m_est) → quantitative model → data predictions (d_pre)

Inverse Theory

[Diagram] data observations (d_obs) → quantitative model → model estimates (m_est)

Play of Errors

[Diagram] m_true → d_obs (observational errors); d_obs → m_est → d_pre (error propagation)

Understanding the effects of observational error is central to inverse theory.

Down-hole Measurements

The tool response at a depth point d_i is a composite function of an array of formation properties and other parameters (g_i):

    T(d_i) = f(g_1, g_2, …, g_n)

T(d_i), the log value at depth d_i, represents a measurement averaged over an earth volume surrounding the point of measurement, plus the collective signal from various parts of the borehole.

The surrounding formation is represented by a convolution integral of the form:

    T(d_i) = ∫₋∞^∞ ∫₀^360 ∫₀^∞ K(d_i; x, r, θ) g(x, r, θ) dr dθ dx

The kernel K(d_i; x, r, θ) accounts for the geometrical effects included in the integration, ensuring that the signal belongs to the formation in the vicinity of the depth point; x, r, θ are cylindrical coordinates and g(x, r, θ) is the geophysical property distribution over the probe-sensitive volume.

Well Log Data Inversion

The integral equation is converted into a set of m linear algebraic equations in n unknowns (m > n or m = n), represented as Ax = b.

Tools     Quartz   Calcite  Kaolinite  Muscovite  Water  |  Tool Data
RHOB       2.65     2.71     2.41       2.79       1     |    2.196
NPHI      -0.06    -0.02     0.37       0.25       1     |    0.318
DT        55.5     48       120         55       189     |  102.515
GR        12        0       105        270         0     |   41.520
VOLSUM     1        1         1          1         1     |    1.000
Volumes    0.41     0.07      0.22       0.05      0.25  |

A is the response matrix built from the parameter values of each tool for 100% of each formation component, x is the volume vector of the formation components, and b is the data vector constituted of the different tool measurements.
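A minimal numpy sketch of this forward problem, using the values tabled above (variable names are illustrative):

    import numpy as np

    # Response matrix A: rows = tools (RHOB, NPHI, DT, GR, VOLSUM),
    # columns = end members (quartz, calcite, kaolinite, muscovite, water).
    A = np.array([
        [ 2.65,  2.71,   2.41,   2.79,   1.0],
        [-0.06, -0.02,   0.37,   0.25,   1.0],
        [55.5,  48.0,  120.0,   55.0,  189.0],
        [12.0,   0.0,  105.0,  270.0,    0.0],
        [ 1.0,   1.0,    1.0,    1.0,    1.0],
    ])
    x = np.array([0.41, 0.07, 0.22, 0.05, 0.25])  # fractional volumes, sum = 1
    b = A @ x  # forward problem: gives [2.196, 0.318, 102.515, 41.52, 1.0]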

The Inverse Problem: x = A⁻¹b

The linear equations Ax = b are solved under the constrained condition m = n, subject to appropriate handling of the ill-conditioned system. One of the earliest demonstrations of the method may be seen in Doveton (1986). The sensitivity of A⁻¹ makes the solution unstable against noise and round-off error.
Names       x (Volumes) |  A⁻¹ (Inverse Operator)                        |  Tool Data
Quartz        0.41      |  -4.519  -10.642   0.022   0.011   10.968     |    2.196
Calcite       0.07      |   3.827    9.567  -0.030  -0.014   -7.745     |    0.318
Kaolinite     0.22      |   2.174    0.494   0.023  -0.002   -6.976     |  102.515
Muscovite     0.05      |  -0.645    0.281  -0.010   0.004    2.225     |   41.520
Water         0.25      |  -0.838    0.300  -0.005   0.000    2.528     |    1.000
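A sketch of the direct inversion and its sensitivity, reusing A and b from the sketch above:

    x_hat = np.linalg.solve(A, b)   # x = A^-1 b; exact for noise-free b
    print(np.linalg.cond(A))        # ~6.8e3, flagging the ill-conditioning
    # A 1.5% perturbation of b illustrates the instability of A^-1:
    b_noisy = b * (1 + 0.015 * np.random.randn(5))
    x_noisy = np.linalg.solve(A, b_noisy)  # can differ wildly from x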

Geophysical inverse problems are ill-posed, as the solutions are either non-unique or unstable or both.
Hadamard (1902): existence, uniqueness and stability.

The Challenge arising from Ax = b


In general the model that one seeks is a continuous function
of the space variables with infinitely many degrees of freedom.
On the other hand, the data space is discrete and always of
finite dimension because any real experiment can only result
in a finite number of measurements.
A simple count of variables shows that the mapping from the
data to a model cannot be unique; or equivalently, there must
be elements of the model space that have no influence on the
data.
This lack of uniqueness is apparent even for problems
involving idealized, noise-free measurements. The problem
only becomes worse when the uncertainties of real
measurements are taken into account.
There is no guarantee that Ax = b contains enough information for a unique estimate x; ascertaining the sufficiency of the information devolves upon discrete inverse theory.

Model Space and Data Space

The Earth is a model space with infinite degrees of freedom.

The physics of the experiment decides the model and the finite data.

Linear Model from the Earth

[Diagram: earth model → response matrix A, volume vector x, tool data vector b]

Earth model and the linear Ax = b formulation: A is also known as the sensitivity matrix, which contains the measurements (for 100% volumes of the minerals) corresponding to the end members.
The volume vector has the obvious constraint that its sum is equal to 1.
The tool vector consists of the different measurements, i.e. the data inverted.

Generalized Inverse = Least Squares

Variance reduced to unity (rows of A scaled so that each datum carries unit variance, with 1.5% relative error):

RHOB      77.97    79.73    70.91    82.09    29.42
NPHI     -14.16    -4.72    87.32    59.00   235.99
DT        38.14    32.98    82.46    37.79   129.88
GR        17.05     0.00   149.19   383.63     0.00
VOLSUM    66.67    66.67    66.67    66.67    66.67

Normal-equation solution x = (AᵀA)⁻¹Aᵀb:

(AᵀA)⁻¹                                               Aᵀb           x
 0.0538  -0.0424  -0.0276   0.0084   0.0103        12377.58       0.41
-0.0424   0.0340   0.0204  -0.0061  -0.0077        11644.29       0.09
-0.0276   0.0204   0.0175  -0.0056  -0.0063        30436.01       0.22
 0.0084  -0.0061  -0.0056   0.0018   0.0020        41945.12       0.07
 0.0103  -0.0077  -0.0063   0.0020   0.0023        30796.89       0.21

Given the assumption of a normal distribution of errors which are uncorrelated, the L2 norm is employed to characterize the solution vector x. The generalized inverse gives the minimum-length solution for m > n and becomes the maximum-likelihood solution when m = n.
The method fails when (AᵀA) has no inverse.
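A sketch of the generalized-inverse (least squares) solution via the Moore-Penrose pseudoinverse, reusing A and b from the sketches above:

    x_gi = np.linalg.pinv(A) @ b    # minimum-length least-squares solution
    # equivalently: np.linalg.lstsq(A, b, rcond=None)[0]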

Parameter Covariance Matrix C = σ²(AᵀA)⁻¹

Model uncertainty (%) and correlation of the xᵢ (diagonal: % uncertainty; off-diagonal: correlations):

 23.19   -0.04   -0.03    0.01    0.01        x
 -0.04   18.45    0.02   -0.01   -0.01       0.41
 -0.03    0.02   13.23   -0.01   -0.01       0.09
  0.01   -0.01   -0.01    4.25    0.00       0.22
  0.01   -0.01   -0.01    0.00    4.80       0.07
                                             0.21

The limits, or uncertainty, of the model parameters are estimated from C = σ²(AᵀA)⁻¹. When σ = 1, the Hessian inverse operator (AᵀA)⁻¹ is the model variance-covariance matrix, with the diagonal elements as the variances.
The uncertainty in an estimated model parameter xᵢ is obtained as the square root of its variance, converted into a percentage.
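A sketch of the uncertainty estimate; it assumes σ = 1 after the variance normalization above and reads the percentages as standard deviations relative to the estimates (one plausible reading of the slide):

    C = np.linalg.inv(A.T @ A)          # model variance-covariance matrix
    sigma_x = np.sqrt(np.diag(C))       # standard deviations of the x_i
    uncert_pct = 100 * sigma_x / np.abs(x_hat)  # uncertainty in percent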

Example 2

Tool   QUAR   SM1    SMEC   ILLI   KAOL   CHLO   Water  |  Tool data ± STD error (1.5%)
RHOB   2.65   2.35   2.12   2.53   2.42   2.77   1.09   |   2.3153 ± 0.035
SGR    5      150    180    12     44     2.04   0      |  51.333  ± 0.770
TH/K   3      10     12     3.5    14     16     0      |   7.085  ± 0.106
TNPH   0.04   0.4    0.44   0.3    0.37   0.52   1      |   0.3513 ± 0.005
PEF    2      2      2.04   3.45   1.83   6.37   0.8    |   2.3254 ± 0.035
DT     53.5   60     60     87     77     100    189    |  80.705  ± 1.211
SUM    1      1      1      1      1      1      1      |   1      ± 0.015

Volumes x = [0.35, 0.05, 0.20, 0.09, 0.11, 0.08, 0.12]

Model uncertainty (%) and correlation of the xᵢ:

 46.82    0.06   -0.01   -0.29   -0.13    0.11    0.05        x
  0.06   24.75   -0.04   -0.09   -0.04    0.03    0.02       0.35
 -0.01   -0.04   16.86    0.02    0.01   -0.01    0.00       0.05
 -0.29   -0.09    0.02   63.24    0.18   -0.14   -0.07       0.20
 -0.13   -0.04    0.01    0.18   28.18   -0.06   -0.03       0.09
  0.11    0.03   -0.01   -0.14   -0.06   22.56    0.03       0.11
  0.05    0.02    0.00   -0.07   -0.03    0.03   11.29       0.08
                                                             0.12

Anatomy of Inversion

Singular Value Decomposition

The ill-posed character of the discrete problem, i.e. the structure and numerical behavior of the response matrix A and the modes of the linear transformation, can be understood with the help of the singular value decomposition (SVD).
The operator matrix A can be resolved into three component matrices such that

    A = UΣVᵀ = Σᵢ₌₁ᵏ σᵢ uᵢ vᵢᵀ

U and V are the orthogonal matrices of left and right singular vectors, representing the data space and the model space respectively, while Σ is a diagonal matrix of singular values σᵢ, which are the amplitudes of the modes of transformation.

SVD of Operator A

      2.65   2.71   2.41   2.79    1
     -0.06  -0.02   0.37   0.25    1
A =   55.5   48    120     55    189    = UΣVᵀ
      12      0    105    270      0
       1      1      1      1      1

Left singular vectors U:

-0.01336  0.00499  0.94988 -0.09353 -0.29796
-0.00216  0.00364 -0.12703  0.75599 -0.64212
-0.54151  0.84058 -0.01229 -0.00658  0.00126
-0.84056 -0.54164 -0.00878 -0.00058  0.00081
-0.00566  0.00454  0.28525  0.64783  0.70633

Right singular vectors (transpose) Vᵀ:

-0.12569  0.1998   0.61649 -0.09546  0.74506
-0.08144  0.20077  0.69249  0.34282 -0.59665
-0.47946  0.21892  0.04011 -0.80287 -0.27565
-0.80321 -0.49734 -0.0431   0.31647  0.07408
-0.32021  0.79025 -0.37004  0.35862  0.0862

Singular value spectrum: σ = (319.69, 201.053, 3.281, 0.185, 0.047)

Implications of the SVD

Important insights offered by the SVD in the present case are:
The condition number, the ratio of the largest to the smallest singular value, is 6834.
The ill-conditioning indicated by cond(A) finds support also in the elements of the left and right singular vectors uᵢ and vᵢ, which tend to be more oscillatory, i.e. show more sign changes, as the index i increases or the corresponding singular values decrease.
A singular value σᵢ that is small compared with σ₁ = ‖A‖₂ points towards the existence of a certain linear combination of the columns of A, spanned by the elements of the right singular vector vᵢ, such that ‖Avᵢ‖₂ = σᵢ is small. A similar argument holds for the left singular vector uᵢ and the rows of A. Small singular values therefore indicate that the operator is nearly rank deficient and that the associated uᵢ and vᵢ are effectively numerical null vectors of Aᵀ and A respectively.
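A sketch of the decomposition and the derived diagnostics, reusing A from the first sketch:

    U, s, Vt = np.linalg.svd(A)
    print(s)                             # [319.69, 201.05, 3.28, 0.185, 0.047]
    print(s[0] / s[-1])                  # condition number, ~6.8e3
    print(100 * np.cumsum(s) / s.sum())  # cumulative contribution ratio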

Impact of Truncation

The cumulative contribution ratio for the possible truncation stages is expressed as c_k = (Σᵢ₌₁ᵏ σᵢ) / (Σᵢ σᵢ):

Mode   σᵢ        %       Cumulative %   σᵢ/σ₁ (%)
1      319.69    60.98     60.98         100.0
2      201.053   38.35     99.33          62.9
3        3.281    0.63     99.96           1.0
4        0.185    0.04     99.99           0.1
5        0.047    0.01    100.00           0.0

U (rounded):

-0.013   0.005   0.950  -0.094  -0.298
-0.002   0.004  -0.127   0.756  -0.642
-0.542   0.841  -0.012  -0.007   0.001
-0.841  -0.542  -0.009  -0.001   0.001
-0.006   0.005   0.285   0.648   0.706

Vᵀ (rounded):

-0.126   0.2     0.616  -0.095   0.745
-0.081   0.201   0.692   0.343  -0.597
-0.479   0.219   0.04   -0.803  -0.276
-0.803  -0.497  -0.043   0.316   0.074
-0.32    0.79   -0.37    0.359   0.086

99.33% of the total variance is contributed by the 1st and 2nd modes.
u₁ and v₁ have no sign changes at all, and the number of sign changes increases along the later columns. σ₃ is 1.02% of σ₁, while σ₄ and σ₅ are abysmally lower, suggesting a rank-3 approximation.

Rank 3 Approximation

    A₃ = U₃Σ₃V₃ᵀ,   x₃ = V₃Σ₃⁻¹U₃ᵀb

Original A:                              Rank-3 reconstruction A₃:

 2.65   2.71   2.41   2.79    1           2.66   2.71   2.39   2.80   1.01
-0.06  -0.02   0.37   0.25    1          -0.02  -0.09   0.47   0.21   0.95
 55.5   48    120     55    189          55.50  48.00 120.00  55.00 189.00
 12      0    105    270      0          12.00   0.00 105.00 270.00   0.00
  1      1      1      1      1           0.99   0.98   1.10   0.96   0.95

Data vector b for x:

   x          b
  0.41       2.196
  0.07       0.318
  0.22     102.515
  0.05      41.520
  0.25       1.000

Truncated solution x₃ = [0.232, 0.236, 0.214, 0.060, 0.261]

Solution x: Truncated SVD Solutions

            Rank 5   Rank 4   Rank 3   Rank 2
Quartz       0.41     0.24     0.23     0.10
Calcite      0.07     0.21     0.24     0.09
Kaolinite    0.22     0.28     0.21     0.20
Muscovite    0.05     0.03     0.06     0.07
Water        0.25     0.23     0.26     0.34

The minimum-norm rank-3 solution obtained is x = [0.23, 0.24, 0.21, 0.06, 0.26]. The reconstructed kernel A₃ ≈ A has condition number 97, against 6834 for A.
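A sketch of the truncated-SVD solutions tabled above:

    def tsvd(A, b, k):
        """Minimum-norm solution using only the k largest singular modes."""
        U, s, Vt = np.linalg.svd(A)
        return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

    for k in (5, 4, 3, 2):
        print(k, tsvd(A, b, k))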

Model Resolution (V₅V₅ᵀ = I₅)

Model resolution V₄V₄ᵀ (Rank 4):

 0.44   0.44   0.21  -0.06  -0.06
 0.44   0.64  -0.16   0.04   0.05
 0.21  -0.16   0.92   0.02   0.02
-0.06   0.04   0.02   0.99  -0.01
-0.06   0.05   0.02  -0.01   0.99

Model resolution V₃V₃ᵀ (Rank 3):

 0.44   0.48   0.13  -0.02  -0.03
 0.48   0.53   0.11  -0.06  -0.07
 0.13   0.11   0.28   0.27   0.31
-0.02  -0.06   0.27   0.89  -0.12
-0.03  -0.07   0.31  -0.12   0.86

Diagonal elements of the resolution matrix equal to 1 indicate good resolution — a unique solution.
Non-zero elements other than the diagonal ones in a row indicate that the predicted value of the parameter is a weighted average of the observed data.
Considering the 1st row for example, [0.44, 0.44, 0.21, -0.06, -0.06], the first element of the predicted data vector bᵖ = [b₁ᵖ, b₂ᵖ, b₃ᵖ, b₄ᵖ, b₅ᵖ], viz. b₁ᵖ, will be predicted as a weighted average of the observed data d = [d₁, d₂, d₃, d₄, d₅]. With d = [2.1936, 0.3225, 102.12, 40.38, 0.99], bᵖ = [2.128, 0.326, 103.141, 39.572, 0.99].
Note: 0.44×2.196 + 0.44×2.196 + 0.21×2.196 − 0.06×2.196 − 0.06×2.196 = 2.128

Data Resolution

Data resolution U₄U₄ᵀ (Rank 4):

 0.91  -0.19   0.00   0.00   0.21
-0.19   0.59   0.00   0.00   0.45
 0.00   0.00   1.00   0.00   0.00
 0.00   0.00   0.00   1.00   0.00
 0.21   0.45   0.00   0.00   0.50

Data resolution U₃U₃ᵀ (Rank 3):

 0.90  -0.12   0.00   0.00   0.27
-0.12   0.02   0.01   0.00  -0.04
 0.00   0.01   1.00   0.00   0.00
 0.00   0.00   0.00   1.00   0.00
 0.27  -0.04   0.00   0.00   0.08

The data resolution matrix presents the information density, i.e. which data contribute independent information to the solution. A value of 1 for a diagonal element indicates information independent of the other observations. U₃U₃ᵀ ≠ I₃ shows that there exists a data null space and that the model fit for the data shall be poor.
These results are in agreement with the ill-conditioned behavior of the original design matrix A, which had fewer linearly independent vectors than indicated by its rank of 5.
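A sketch of the two resolution matrices for a rank-k truncation, reusing A from above:

    U, s, Vt = np.linalg.svd(A)
    k = 3
    R_model = Vt[:k].T @ Vt[:k]      # model resolution V_k V_k^T
    R_data = U[:, :k] @ U[:, :k].T   # data resolution (information density)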

SVD Conclusions

Mode   σᵢ        %       Cumulative %   σᵢ/σ₁ (%)
1      319.69    60.98     60.98         100.0
2      201.053   38.35     99.33          62.9
3        3.281    0.63     99.96           1.0
4        0.185    0.04     99.99           0.1
5        0.047    0.01    100.00           0.0

The three energetic model factors of the activated space explain 99.96% of the variance in the data. The first pair u₁ and v₁ contributes 60.98%, followed by u₂ and v₂ with 38.35% and u₃ and v₃ adding 0.63%. V_k represents the optimized distribution of the row vectors of the model space and U_k represents the optimized distribution of the column vectors of the data space.
Singular value decomposition thus enables an objective ranking of uncorrelated modes of variability, or latent variables, which helps to differentiate the data from noise.

Impact of Noise

The original kernel A₅ₓ₅ may be viewed as an A₃ₓ₃ data kernel perturbed by a noise matrix of norm ε. Such a perturbation shall change the zero singular values by up to ε, and therefore any singular value < ε stands the chance of being contributed by the noise.
R(A) = 5 needs to be contrasted with the ε-rank R(A, ε) by examining the norm of the data error possible with the kernel A.
For any of the variables in the kernel A, i.e. columns, the standard error is 1.5%, and for the set of observations considered, viz. RHOB, NPHI, DT, GR and VOL_SUM, the likely norm of the error may be computed from a likely data vector, say [2.196, 0.318, 102.515, 41.52, 1], for which the errors will be [0.033, 0.005, 1.538, 0.623, 0.015]; the norm of the error shall be ε ≈ 1.66.
If we consider the balanced uncertainties given by proprietors of certain software tools, the noise vector shall be [0.027, 0.015, 2.25, 6, 1.5], but the value for GR (= 6) is almost unpredictable and its inclusion may lead to a high value of the error norm. Excluding GR, we obtain ε ≈ 2.7, and hence by any count the low singular values seen above, σ₄ = 0.185 and σ₅ = 0.047, cannot be treated as making a genuine addition to the range of the problem.

Truth of the Linear Formulation Ax = b

The truncated solutions, theoretically free from noise, differ much from the true model x_true used for deriving the data vector b. The Rank-4 and Rank-3 solutions are nearly the same but significantly different from x_true.
Here a question arises: does the linear formulation have sufficient information content to permit retrieval of the true model?
Each column of A represents an end-member configuration realizable as data in 2 decimal digits, like 2.65, 0.35 etc. In other words, the forward problem is expressed in terms of 2 decimal digits multiplied by a fractional volume vector to yield data of accuracy up to 1 decimal digit only. Given such a forward problem, can the measurements have the information content to retrieve the volume vector in 2 decimal digits?

Round-off Error

b = [2.196, 0.318, 102.515, 41.52, 1];  perturbed b* = [2.22, 0.3, 103.51, 42.5, 1]

Rank-5:  x5 = [0.41, 0.07, 0.22, 0.05, 0.25];   x5* = [0.53, -0.05, 0.28, 0.02, 0.22];   Δ-L2 = 0.186
Rank-4:  x4 = [0.240, 0.206, 0.282, 0.033, 0.231];   x4* = [0.253, 0.165, 0.385, -0.003, 0.188];   Δ-L2 = 0.125
Rank-3:  x3 = [0.232, 0.236, 0.214, 0.060, 0.261];   x3* = [0.253, 0.165, 0.385, -0.003, 0.188];   Δ-L2 = 0.005

The Rank-4 and Rank-3 operators have not contributed much to the stability of the solution, as may be noted from the above data.
Constraints are needed to ensure positive values and to make the solution agree with prior information about the solution x, the volume vector.
The situation demands one or another of the known regularization methods, depending upon their suitability as adjudged domain-wise.

Model Space

Application of regularization methods is problem specific. A demonstrated advantage of one method over another in addressing instability due to rank deficiency in specific kinds of problems is missing in the literature. Tikhonov regularization can be found to be quite popular in its application.
"Common for all these regularization methods is that they replace the ill-posed problem with a nearby well-posed problem which is much less sensitive to perturbations" — Per Christian Hansen.
Given the rank-deficient problem of comparatively small kernel size we have, Tikhonov regularization is one of the best options.

Tikhonov Regularization

Equations of the kind Ax = b do not yield the right numerical solution when the data b contain noise. If it is known that the given data satisfy an error estimate ‖b − b_exact‖₂ ≤ δ, Tikhonov states that an approximate solution can be found by minimizing the regularization functional:

    J_λ(x) = ‖Ax − b‖₂² + λ²‖Lx‖₂²  →  min over x

L is typically the identity matrix or a well-conditioned discrete approximation to some derivative operator. In operator notation:

    x_λ = (AᵀA + λ²LᵀL)⁻¹ Aᵀb
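A sketch of the Tikhonov solution, reusing A and b from above; the norm pairs printed for each λ are the points traced by the L-curve used later:

    def tikhonov(A, b, lam, L=None):
        """x_lam = (A^T A + lam^2 L^T L)^-1 A^T b, with L = I by default."""
        L = np.eye(A.shape[1]) if L is None else L
        return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)

    for lam in (0.047, 0.185, 1.0, 5.0):
        x_lam = tikhonov(A, b, lam)
        print(lam, np.linalg.norm(x_lam), np.linalg.norm(A @ x_lam - b))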

Tikhonov continued

In terms of the SVD, A = U diag(σ₁, …, σₙ) Vᵀ, the Tikhonov solution applies filter factors σᵢ²/(σᵢ² + λ²) to the modes:

    x_λ = V diag( σᵢ / (σᵢ² + λ²) ) Uᵀ b      (regularized inverse)

and as λ → 0 this reduces to

    x = V diag( 1/σᵢ ) Uᵀ b                   (generalized inverse A⁺)

Tikhonov Example-1

      b       x_true   λ=0.001    e      λ=1      e      λ=5      e
     2.196     0.41     0.36     0.05    0.22    0.19    0.19    0.22
     0.318     0.07     0.11    -0.04    0.22   -0.15    0.19   -0.12
   102.515     0.22     0.24    -0.02    0.22    0.00    0.21    0.01
    41.52      0.05     0.05     0.00    0.06   -0.01    0.06   -0.01
     1         0.25     0.24     0.01    0.27   -0.02    0.29   -0.04
L2 norm        0.54     0.51     0.07    0.47    0.24    0.45    0.25

Application of a small value for λ, such as λ = 1, in fact gave a solution closer to the truncated A₃ solution. The minimum-norm solution happens for λ = 0; this is not the expected behavior and may be a consequence of the design or of the use of hypothetical data.
Petrophysicists are in need of the mathematicians' help to interpret the results.

Tikhonov Example-2

x_λ = A_λ⁺ b with b = [2.401, 0.412, 115.260, 36.749, 1.0]:

           λ=0      λ=σ₅: x, Ax−b    λ=σ₄: x, Ax−b    λ=1: x, Ax−b    λ=σ₃: x, Ax−b
          -1.29     0.20  -0.034     0.24  -0.046     0.24  -0.096    0.22  -0.206
           1.44     0.00  -0.040     0.26  -0.032     0.25  -0.022    0.23  -0.007
           1.01     0.34   0.000     0.27   0.000     0.24  -0.001    0.23  -0.003
          -0.20    -0.01   0.000     0.02   0.000     0.03   0.001    0.04   0.004
           0.04     0.26   0.082     0.29   0.090     0.31   0.079    0.33   0.047
‖x‖²       4.8      0.31             0.29             0.28            0.26
‖Ax−b‖²             0.0096           0.0111           0.0160          0.0446

L-Curve

[Plot: ‖x‖₂ versus ‖Ax−b‖₂, with points marked at λ = σ₅ = 0.046, λ = σ₄ = 0.185, λ = 1 and λ = σ₃ = 3.28]

The L-curve is a plot of the size of the regularized solution versus the size of the corresponding residual for all valid regularization parameters.
The L-curve does not depict a distinct elbow, but the point of maximum curvature can be identified to lie between λ = σ₄ and λ = 1.

Validity of the Regularized Solution

Considering the few solutions likely to be valid, for λ = σ₄ = 0.18, λ = 1 and λ = 2, it can be seen that for the log data vector used, the porosity value x₅ increases as λ increases, i.e. 0.29, 0.31, 0.32, which raises a question mark over the validity of the regularization, as the log values cannot correspond to so high a porosity.

      b         λ=σ₄: x, Ax−b     λ=1: x, Ax−b      λ=2: x, Ax−b
     2.401      0.24  -0.046      0.24  -0.096      0.23  -0.149
     0.412      0.26  -0.032      0.25  -0.022      0.24  -0.014
   115.260      0.27   0.000      0.24  -0.001      0.24  -0.002
    36.749      0.02   0.000      0.03   0.001      0.03   0.002
     1.0        0.29   0.090      0.31   0.079      0.32   0.064
‖x‖²            0.29              0.28              0.27
‖Ax−b‖²         0.0111            0.0160            0.0265
x₅₁ = 1−(x₁₁+…+x₄₁)   0.21        0.24              0.26

With the unity constraint imposed and the porosity derived as x₅₁ = 1 − (x₁₁+…+x₄₁), the value for porosity appears distorted, exceeding 0.20: viz. 0.24 when λ = 1 and 0.26 when λ = 2. For a matrix density of 2.65, the porosity could have been only around 0.15.

Tikhonov Regularization with Prior

    min over x:  ‖Ax − b‖₂² + λ²‖x − x_prior‖₂²
    x_λ = (AᵀA + λ²I)⁻¹ (Aᵀb + λ²x_prior)

x_prior = [0.46, 0.07, 0.22, 0.05, 0.20], ‖x_prior‖² = 0.31

          λ=0.23: x, ax−b   λ=0.30: x, ax−b   λ=1.0: x, ax−b   λ=5: x, ax−b    λ=15: x, ax−b
           0.40  -0.033      0.43  -0.033      0.46  -0.028     0.48   0.027    0.48   0.049
           0.11  -0.057      0.09  -0.056      0.07  -0.054     0.08  -0.061    0.09  -0.064
           0.32   0.000      0.29   0.000      0.25  -0.001     0.25  -0.012    0.25  -0.098
           0.00   0.000      0.00   0.000      0.02   0.000     0.02   0.005    0.02   0.046
           0.27   0.085      0.27   0.088      0.29   0.096     0.28   0.113    0.28   0.119
‖x‖²       0.3418            0.3511            0.3672           0.3775          0.3810
‖ax−b‖²    0.0116            0.0121            0.0129           0.0174          0.0325
Σx         1.09              1.09              1.10             1.11            1.12

x₅₁ increases as λ increases, and the unity constraint was not adhered to in the above exercise of regularization. With the x_prior initially assumed for deriving b, the minimum norm solution and the minimum residual norm happen for λ = 0.
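A sketch of the prior-weighted variant used above (x_prior as tabled):

    def tikhonov_prior(A, b, lam, x_prior):
        """Minimizer of ||Ax - b||^2 + lam^2 ||x - x_prior||^2."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam**2 * np.eye(n),
                               A.T @ b + lam**2 * x_prior)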

Prior with Unity Constraint

The unity constraint may be enforced by deriving x₅₁ = 1 − (x₁₁ + x₂₁ + x₃₁ + x₄₁), or by altering the algorithm to solve for only (n−1) elements and deriving the nth element as 1 minus the sum of the (n−1) elements.

Volumes                  λ=0.23   λ=0.30   λ=1     λ=5     λ=15    No Prior
x₁₁                       0.40     0.43     0.46    0.48    0.48    0.24
x₂₁                       0.11     0.09     0.07    0.08    0.09    0.25
x₃₁                       0.32     0.29     0.25    0.25    0.25    0.24
x₄₁                       0.00     0.00     0.02    0.02    0.02    0.03
x₅₁ = 1−(x₁₁+…+x₄₁)       0.18     0.19     0.19    0.17    0.16    0.23

With the unity constraint imposed, the porosity values have become reasonable, but when this is contrasted with the no-prior solution, it becomes apparent that the linear formulation without prior values cannot return even a reasonably true model.

L-Curve in the Above Case

[Plot: ‖x‖₂ versus ‖Ax−b‖₂]

No distinct elbow is seen; in the following slide an alternative prior vector is used to study the issue further.

Alternate Prior

Prior 2 = [0.35, 0.15, 0.22, 0.08, 0.20], ‖x_prior‖² = 0.24

          λ=0.6: x, ax−b    λ=1: x, ax−b     λ=2: x, ax−b     λ=3: x, ax−b    λ=5: x, ax−b
           0.55   0.011      0.36  -0.030     0.26  -0.180     0.21  -0.33     0.17  -0.53
           0.00  -0.071      0.16  -0.043     0.21  -0.013     0.19   0.01     0.15   0.04
           0.28   0.000      0.25  -0.001     0.24  -0.005     0.23  -0.01     0.23  -0.04
           0.00   0.000      0.02   0.000     0.03   0.003     0.04   0.01     0.04   0.01
           0.27   0.103      0.30   0.096     0.32   0.054     0.34   0.01     0.37  -0.05
‖x‖²       0.4536            0.3094           0.2690           0.2521          0.2378
‖ax−b‖²    0.0159            0.0120           0.0356           0.1088          0.2831
x₅₁ = 1−(x₁₁+…+x₄₁)  0.17    0.20             0.27             0.33            0.42
Σx         1.10              1.10             1.05             1.01            0.95

In this case it is seen that the prior without the unity constraint fails to retrieve a reasonable solution.
With λ = 1 and the sum-of-volumes = 1 constraint, the prior is almost reproduced.
Can such use of priors have any meaning, or lead to a valid technique, when Ax = b lacks the information?

L-Curve in the Above Case

[Plot: ‖x‖₂ versus ‖Ax−b‖₂]

The L-curve depicts a distinct elbow, and λ = 1 becomes the obvious choice of regularization parameter.
Interpretation has to be specific to the problem.

Discrete Picard Condition

The SVD solution for x is expressed as:

    x = A⁻¹b = Σᵢ₌₁ⁿ (uᵢᵀb / σᵢ) vᵢ

The discrete Picard condition states that the numerators |uᵢᵀb| must decay faster than the denominators σᵢ. But the literature presents contradictory opinions on the trust that can be placed in the discrete Picard condition. Due to noise in A, or for other reasons of design of the inverse model, the DPC is found to be violated.
Maybe it is not applicable to the small-size problem discussed above, but why that is so remains to be discussed in the applied mathematics literature.

Discrete Picard Condition

The DPC for A and for A scaled to remove the units (AUF):

A:                                      AUF:
|uᵢᵀb|    σᵢ       |uᵢᵀb|/σᵢ           |uᵢᵀb|    σᵢ       |uᵢᵀb|/σᵢ
90.45     319.69    0.28               118.73    506.15    0.23
63.70     201.05    0.32                82.20    259.24    0.32
 0.71       3.28    0.22                37.13    140.34    0.26
 0.02       0.18    0.09                 1.67     14.06    0.12
 0.01       0.05    0.23                 0.67      3.12    0.21

The discrete Picard condition is apparently not satisfied. Is this relevant to small-size inversion problems?
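A sketch of the Picard-condition check tabled above, reusing A and b:

    U, s, Vt = np.linalg.svd(A)
    coeff = np.abs(U.T @ b)   # Fourier (Picard) coefficients |u_i^T b|
    print(coeff / s)          # the DPC asks these ratios to decay with i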

Picard Plot-1

[Picard plot of a 7×7 example: |uᵢᵀb| and σᵢ against the index i]

Data of a 7×7 example is used here to illustrate the failure of the DPC. But what does it mean physically for the problem?

Impact of Weights

Kernel A:                             Weight matrix W = diag(0.555, 1.00, 0.0051, 0.0007, 0.01)

 2.65   2.71   2.41   2.79    1       Modified A = AW:
-0.06  -0.02   0.37   0.25    1
 55.5   48    120     55    189        1.47   2.71   0.01   0.00   0.01
 12      0    105    270      0       -0.03  -0.02   0.00   0.00   0.01
  1      1      1      1      1       30.83  48.00   0.62   0.04   1.89
                                       6.67   0.00   0.54   0.20   0.00
                                       0.56   1.00   0.01   0.00   0.01

Aw = UΣVᵀ

U:
-0.054  -0.043   0.949  -0.128  -0.280
 0.001  -0.003  -0.106  -0.990   0.093
-0.996  -0.061  -0.060   0.006  -0.004
-0.064   0.997   0.041  -0.007   0.001
-0.020  -0.014   0.288   0.059   0.955

Σ = diag(57.295, 5.630, 0.102, 0.004, 0.00027)

Vᵀ:
-0.545  -0.838  -0.011  -0.001  -0.033
 0.834  -0.543   0.088   0.033  -0.021
 0.000   0.039  -0.014   0.075  -0.996
 0.085  -0.042  -0.914  -0.394  -0.019
 0.006  -0.002  -0.395   0.915   0.074

1. It strikes instantly that the scaled matrix presents a far greater condition number (≈ 2×10⁵ against 6834), i.e. worse conditioning than the original matrix.
2. The vectors u₁ and v₁ depict increased oscillation.

Modes and Variances

Mode   σᵢ         %         Cumulative %   σᵢ/σ₁ (%)   σ₁/σᵢ
1      57.295     90.899      90.899       100         1
2       5.6303     8.9326     99.832         9.83      10
3       0.1016     0.1612     99.993         0.18      564
4       0.004      0.0064    100             0.01      14235
5       0.0003     0.0004    100             0.00      215394

The number of energetic modes has been reduced compared with the SVD of A.
A plot of the singular values shows a distinct elbow, which calls for a rank-2 approximation, in contrast to the rank-3 approximation of A.
The number of manifest variables has become 2 instead of the 3 in A, and the singular values to the right of the elbow are discarded as noise.

Diagnosis for a Valid Solution

A number of issues arise against the background of the above discussion:
1. Where can we find inversion theory applicable to small-scale problems of the above kind?
2. The applied mathematics literature is silent about diagnosis in respect of (a) the information content of the linear formulation, (b) SVD analysis of small-size problems, (c) discussions on the weights and scaling of the problems, (d) regularization, (e) the discrete Picard condition.
3. Can the iterative procedures do any magic when direct solutions are not valid?
4. Can the Levenberg-Marquardt algorithm offer something extra in an iterative process?

Levenberg-Marquardt Algorithm

    x_{k+1} = x_k − (H + λ·diag(H))⁻¹ Aᵀ(Ax_k − b),   H = AᵀA

where H = AᵀA is the Hessian with which the weighted least squares method is sought to be implemented.
H will be more ill-conditioned than A, and hence a positive constant λ times the diagonal elements of H is added to ensure larger eigenvalues: H + λ·diag(H).
The direction of the error decides the tuning of λ up and down to minimize the error, and Marquardt's innovation helps to take a large step in a direction with low curvature and a small step when the update reaches a direction with high curvature.
An equivalence is drawn between the stochastic inverse A⁺ₛ and the LM algorithm.
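A sketch of the damping loop described above, for the linear residual Ax − b; the factor-of-10 tuning schedule is an assumption, not prescribed by the slide:

    def lm_linear(A, b, x0, lam=1.0, iters=50):
        """Levenberg-Marquardt with Marquardt's diag(H) scaling."""
        x, H = x0.copy(), A.T @ A
        err = np.linalg.norm(A @ x - b)
        for _ in range(iters):
            g = A.T @ (A @ x - b)  # gradient of 0.5*||Ax - b||^2
            x_new = x - np.linalg.solve(H + lam * np.diag(np.diag(H)), g)
            err_new = np.linalg.norm(A @ x_new - b)
            if err_new < err:              # success: damp less
                x, err, lam = x_new, err_new, lam / 10
            else:                          # failure: damp more
                lam *= 10
        return x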

Example: Uncertainty versus λ

Kernel (same as Example 2; rows = tools, columns = components):

Tool   QUAR   SM1    SMEC   ILLI   KAOL   CHLO   Water
RHOB   2.65   2.35   2.12   2.53   2.42   2.77   1.09
SGR    5      150    180    12     44     2.04   0
TH/K   3      10     12     3.5    14     16     0
TNPH   0.04   0.4    0.44   0.3    0.37   0.52   1
PEF    2      2      2.04   3.45   1.83   6.37   0.8
DT     53.5   60     60     87     77     100    189
SUM    1      1      1      1      1      1      1

x_true = [0.4, 0.05, 0.07, 0.09, 0.2, 0.05, 0.14]

Data b with 1.5% errors (b × 0.015), and singular values σᵢ of the kernel:

b: 2.3287 ± 0.0349, 32.082 ± 0.4812, 6.455 ± 0.0968, 0.3338 ± 0.0050, 2.1498 ± 0.0322, 83.29 ± 1.2494, 1 ± 0.015
σ: 638.77, 334.89, 192.39, 89.15, 64.75, 3.82, 1.12

Regularized solutions x and uncertainties (%):

         λ=0         λ=σ₇=1.12    λ=2          λ=3          λ=σ₆=3.82    λ=5
        x    Unc%    x    Unc%    x    Unc%    x    Unc%    x    Unc%    x    Unc%
QUAR    -   47.06   0.37  26.54  0.35  11.91  0.34   6.58  0.33   4.55  0.33   3.04
SM1     -   24.81   0.04  20.20  0.05  15.80  0.05  12.26  0.06   9.86  0.07   7.26
SMEC    -   16.84   0.07  15.58  0.06  13.01  0.06  10.25  0.05   8.28  0.05   6.11
ILLI    -   63.58   0.13  35.51  0.16  15.30  0.17   7.89  0.17   5.14  0.17   3.17
KAOL    -   28.33   0.22  15.87  0.23   6.97  0.23   3.77  0.24   2.65  0.24   1.91
CHLO    -   22.68   0.04  12.68  0.03   5.50  0.02   2.90  0.02   1.97  0.02   1.34
Water   -   11.33   0.13   6.37  0.13   2.82  0.13   1.55  0.12   1.09  0.12   0.78

Model & Data Resolution for λ = 0, 2

At λ = 0, both the model resolution A⁺A and the data resolution AA⁺ are the 7×7 identity matrix I₇.

λ = 2, model resolution:

 0.78  -0.04  -0.01   0.29   0.13  -0.10  -0.05
-0.04   0.86   0.11   0.09   0.03  -0.03  -0.01
-0.01   0.11   0.91  -0.01   0.00   0.01   0.00
 0.29   0.09  -0.01   0.61  -0.17   0.14   0.07
 0.13   0.03   0.00  -0.17   0.92   0.06   0.03
-0.10  -0.03   0.01   0.14   0.06   0.95  -0.02
-0.05  -0.01   0.00   0.07   0.03  -0.02   0.99

λ = 2, data resolution:

 0.85  -0.01   0.00   0.00   0.02  -0.05   0.19
-0.01   1.00   0.00   0.01   0.00  -0.03   0.03
 0.00   0.00   1.00   0.02   0.00  -0.03   0.02
 0.00   0.01   0.02   0.91   0.00   0.17  -0.11
 0.02   0.00   0.00   0.00   1.00   0.00  -0.02
-0.05  -0.03  -0.03   0.17   0.00   0.66   0.28
 0.19   0.02   0.02  -0.11  -0.02   0.28   0.61

Model & Data Resolution for λ = 5

Model resolution:

 0.71   0.00  -0.06   0.36   0.16  -0.13  -0.07
 0.00   0.62   0.30   0.09   0.03  -0.04  -0.01
-0.06   0.30   0.75  -0.01   0.01   0.00  -0.01
 0.36   0.09  -0.01   0.51  -0.22   0.17   0.09
 0.16   0.03   0.01  -0.22   0.90   0.08   0.04
-0.13  -0.04   0.00   0.17   0.08   0.94  -0.03
-0.07  -0.01  -0.01   0.09   0.04  -0.03   0.98

Data resolution:

 0.66   0.00   0.02  -0.10   0.04   0.05   0.33
 0.00   1.00   0.00   0.02   0.00  -0.04   0.03
 0.02   0.00   0.99   0.03   0.00  -0.05   0.01
-0.10   0.02   0.03   0.82   0.02   0.29  -0.09
 0.04   0.00   0.00   0.02   0.99  -0.01  -0.03
 0.05  -0.04  -0.05   0.29  -0.01   0.48   0.28
 0.33   0.03   0.01  -0.09  -0.03   0.28   0.47

Given the uncertainty and resolution shown above, can the linear formulation Ax = b under discussion contain sufficient information to facilitate retrieval of a reasonably true solution?
If the above deductions are incorrect, how can the efficiency of a linear formulation and method of inversion be analysed?
How can small-scale problems be better understood with the help of mathematical theory?

Constrained Optimization

The Lagrangian to be minimized is defined here as:

    L(x, λ₁, λ₂) = ‖Ax − b‖₂² + λ₁‖x − x_ref‖₂² + λ₂‖Sx‖₂²

with the smoothing operator

        1 1 0 0 0
        0 1 1 0 0
    S = 0 0 1 1 0
        0 0 0 1 1
        0 0 0 0 1

Differentiating the Lagrangian with respect to x and equating to zero, the solution x can be obtained as:

    x = (AᵀA + λ₁I + λ₂SᵀS)⁻¹ (Aᵀb + λ₁x_ref)
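A sketch of the constrained solve, under the Lagrangian and the operator S as reconstructed above (x_ref is the prior solution vector from the next slide):

    n = A.shape[1]
    lam1 = lam2 = 1.0
    x_ref = np.array([0.46, 0.05, 0.20, 0.07, 0.22])
    S = np.eye(n) + np.diag(np.ones(n - 1), 1)  # bidiagonal operator S
    x = np.linalg.solve(A.T @ A + lam1 * np.eye(n) + lam2 * (S.T @ S),
                        A.T @ b + lam1 * x_ref)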

Solutions by the Lagrange Multiplier Method

λ₁ = 1, λ₂ = 1; x_ref = x⁺(b1) = [0.46, 0.05, 0.20, 0.07, 0.22] (‖·‖ = 0.55)

b1 = [2.2518, 0.2829, 97.36, 45.42, 1]
x1 = [0.31, 0.19, 0.16, 0.09, 0.24] (‖x1‖ = 0.48);  Ax1 − b1 = [-0.019, 0.023, 0.000, 0.000, 0.001] (‖·‖ = 0.03)

b2 = b1 + e = [2.2856, 0.2871, 98.8204, 46.1013, 1]
x⁺(b2) = [0.30, 0.17, 0.31, 0.04, 0.19] (0.50);  x2 = [0.31, 0.19, 0.17, 0.09, 0.25] (0.48)

b3 = b2 + e = [2.3199, 0.2915, 100.3027, 46.7928, 1.0000]
x⁺(b3) = [0.30, 0.17, 0.31, 0.04, 0.19] (0.50);  x3 = [0.32, 0.19, 0.17, 0.09, 0.25] (0.49)

There is no appreciable improvement in the solution characteristics from the use of the derivative operator.
With the parameter choice λ₁ = 1 and λ₂ = 1, errors to the tune of 1.5% do not alter the regularized solution computed against a specific and precise prior solution vector (x_ref).
But the departure of the regularized solution from the prior solution vector, and the relation of both to the true solution vector, becomes a subtle issue that needs more detailed deliberation.

Changed Prior Vector

b1 = [2.2518, 0.2829, 97.36, 45.42, 1]; λ₁ = 1, λ₂ = 1

x_ref = [0.50, 0.10, 0.15, 0.03, 0.22] (‖·‖ = 0.55)  →  x = [0.32, 0.19, 0.15, 0.09, 0.25] (0.48);  Ax − b1 = [-0.007, 0.022, 0.000, 0.000, 0.006] (0.02)

x_ref = [0.40, 0.20, 0.10, 0.08, 0.22] (0.55)  →  x = [0.29, 0.22, 0.15, 0.10, 0.25] (0.48)

With the log data vector b_log = [2.468, 0.4782, 110.3399, 32.0829, 1] and x_ref = [0.40, 0.20, 0.10, 0.08, 0.22]:
x⁺(b_log) = [-2.46, 2.54, 1.09, -0.19, 0.03];  regularized x = [0.34, 0.26, 0.14, 0.05, 0.31] (0.55);  Ax − b1 = [0.144, 0.070, 12.980, -13.336, 0.104]

Here the precise prior solution vector has been changed to see the impact on the solution, and it can be found that the regularized solution did not significantly respond to the prior solution vector used.
But when a log vector is applied to the same scenario, the solution shows departure from the unity constraint.
When the data vector used differs from the inverse-crime scenario, the solution is perturbed, suggesting instability of the solution despite the LMM implementation.

Tikhonov versus LMM Solutions

b1 = [2.2518, 0.2829, 97.36, 45.42, 1]

λ = 0:  x_tikh = x⁺ = [0.46, 0.05, 0.20, 0.07, 0.22];  ‖x‖ = 0.55, ‖x‖² = 0.31

          λ=0.001: x, Ax−b     λ=0.01: x, Ax−b    λ=0.04678: x, Ax−b   λ=0.18459: x, Ax−b   λ=1: x, Ax−b
           0.39   0.0006        0.29   0.002       0.25  -0.01          0.24   0.00           0.23  -0.06
           0.10   0.0031        0.19   0.010       0.23   0.02          0.24   0.02           0.23   0.03
           0.22   0.0002        0.25   0.000       0.23   0.00          0.22   0.00           0.21   0.00
           0.06  -0.0008        0.06  -0.001       0.07   0.00          0.07   0.00           0.08   0.00
           0.21  -0.0028        0.21  -0.006       0.22   0.00          0.22   0.00           0.23  -0.02
‖x‖        0.51                 0.48               0.47                 0.47                  0.46
‖Ax−b‖     0.00                 0.01               0.023                0.016                 0.07
‖x‖²       0.26                 0.23               0.22                 0.22                  0.21
‖Ax−b‖²    0.00                 0.00               0.00                 0.00                  0.01

The Tikhonov solution vectors are different, many elements differing drastically from the constrained-optimization output. Here the issues come up:
How to choose between different optimization methods?
Under what circumstances is one regularization method preferred over another?

Diagnosis for a Valid Solution x from Ax = b

1. Where does the problem lie — is A a smoothing operator?
When A is a smoothing operator, the forward solution of the equation Ax = b dampens the variability in the vector space x when transforming it to the vector space b. Smoothing operators under inversion act like amplifiers. The smoothing can also dampen the signals in the input data b below the noise levels, effectively leading to a loss of information, with no means of getting it back in an inversion operation.

2. Is it a problem with the accuracy of the data vector b?
Can enhanced instrumentation and accuracy in b bring more quality or efficiency to the data inversion process?

Diagnosis for a Valid Solution x from Ax = b

3. Ill-conditioning of the problem is also a result of the discretization of continuous functions, in which information is lost. As the resolution of the discretized function increases, each component of the solution vector x has less and less influence on the data b.
In the inversion problem, the influence of x can be viewed as the information content of x in each data point bᵢ. Therefore, as the influence of x decreases, each data point bᵢ will contain less information about the higher-resolution elements of x.
As the information content decreases, it is subjected to higher and higher amplification to reconstruct the unknowns. This amplification works on the error in the observations and in the kernel as well, creating situations where the errors of the reconstructed values are very much larger than the values themselves. Thus, a trade-off between solution resolution and error exists.

Conclusions

1. Small-scale inverse problems of the kind encountered in well log data inversion (petrophysics) call for adequate theory to explain the usefulness of the method.
2. Interpretation of the inverse theory elements and the SVD is domain specific, and petrophysics remains an untouched area.
3. Areas like the use of regularization methods, the L-curve and the applicability of the discrete Picard condition towards the diagnosis of a valid solution remain to be explored.
4. The efficiency of Tikhonov regularization and the Levenberg-Marquardt algorithm is to be studied against the linear earth model used to describe the measurements.
5. The possibility of developing better alternatives, and their relative merits over the deterministic approach, are to be reckoned.
6. Quantification of the uncertainty of the model parameters is an essential requirement.

SVD Analysis Example

uᵢᵀb      σᵢ        uᵢᵀb/σᵢ   σᵢ² (variance)   % variance   |uᵢᵀb/σᵢ|   Energy   Ratio to max
20.1974   4502.54    0.004    20272866.45      0.95         0.004       0.004    0.012
-7.1581   1069.07   -0.007     1142910.66      0.05         0.007       0.006    0.018
 4.5046    108.86    0.041       11850.72      0.00         0.041       0.037    0.111
-0.4225     10.58   -0.040         112.02      0.00         0.040       0.036    0.108
 1.8436      4.97    0.371          24.66      0.00         0.371       0.335    1.000
 0.3201      1.07    0.299           1.14      0.00         0.299       0.270    0.806
 0.0004      0.00    0.344           0.00      0.00         0.344       0.311    0.928

The condition number is 4108157, and it is obvious that the residual cannot be any indication of the quality of the solution.
The efficacy of iterative methods also comes into question, as the Picard coefficients are unfavorably distributed against the small singular values.

SVD Analysis contd.

Scaling can be resorted to in order to reduce the condition number; application of a weight matrix, derived as the inverse of the error estimates on the data (1.5%), gives a weighted operator of condition number 427708.
As we are working in an inverse-crime scenario, where the data vector has been artificially created from the model, we get the model x used above as the minimum-norm solution.
Adding noise of mean zero and standard deviation 1 to the data vector b, the weighted least squares solutions depict residuals varying from 1.13 to 3.14 for the same solution vector.
In fact, the prior dominates the solution, as the linear operator lacks the structure to illuminate the model space with information retrieved from the data space.
No other explanation can be thought of for Picard coefficients weighted towards the ill-conditioned model space.

Data Vector at a Depth

Volume vectors x₁…x₅ (depths 1-5):

Depth    Q      C      K      M      W
1        0.46   0.05   0.20   0.07   0.22
2        0.48   0.03   0.18   0.07   0.24
3        0.50   0.03   0.16   0.05   0.26
4        0.50   0.04   0.16   0.05   0.25
5        0.52   0.03   0.15   0.03   0.27

A transpose:

 2.65  -0.06   55.5   12    1
 2.71  -0.02   48      0    1
 2.41   0.37  120    105    1
 2.79   0.25   55    270    1
 1      1     189      0    1

Data vectors bᵢ = A xᵢ:

         b1        b2        b3        b4        b5
RHOB    2.2518    2.2224    2.1914    2.2085    2.1745
NPHI    0.2829    0.2947    0.3011    0.2909    0.3012
DT     97.36     98.89    100.28     98.87    100.98
GR     45.42     43.56     36.3      36.3      30.09
SUM     1         1         1         1         1

Inverse crime operations: Axᵢ = bᵢ and xᵢ = A⁺bᵢ

Kernel A from the Data

Inverse of volume matrix X⁻¹:

-22.5    54   -106.5    42    34
 27.5   -96     93.5    42   -66
 27.5     4     -6.5   -58    34
-22.5     4     43.5    42   -66
 27.5   -96    193.5   -58   -66

Observed data B:

 2.252   0.283    97.36   45.42   1
 2.222   0.295    98.89   43.56   1
 2.191   0.301   100.28   36.3    1
 2.209   0.291    98.87   36.3    1
 2.175   0.301   100.98   30.09   1

Coefficient matrix Aᵀ = X⁻¹B:

 2.65   -0.06    55.5    12.00   1
 2.71   -0.02    48       0.00   1
 2.41    0.37   120     105.00   1
 2.79    0.25    55     270.00   1
 1       1      189       0.00   1

The coefficient matrix is retrieved from the observed tool data by using the inverse of the volume matrix.
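A sketch of this kernel recovery; X holds the five volume vectors as rows and B the corresponding tool-data rows, so B = X Aᵀ:

    X = np.array([[0.46, 0.05, 0.20, 0.07, 0.22],
                  [0.48, 0.03, 0.18, 0.07, 0.24],
                  [0.50, 0.03, 0.16, 0.05, 0.26],
                  [0.50, 0.04, 0.16, 0.05, 0.25],
                  [0.52, 0.03, 0.15, 0.03, 0.27]])   # volume vectors by depth
    B = np.array([[2.2518, 0.2829,  97.36, 45.42, 1],
                  [2.2224, 0.2947,  98.89, 43.56, 1],
                  [2.1914, 0.3011, 100.28, 36.30, 1],
                  [2.2085, 0.2909,  98.87, 36.30, 1],
                  [2.1745, 0.3012, 100.98, 30.09, 1]])  # tool data by depth
    A_rec = (np.linalg.inv(X) @ B).T   # since B = X @ A.T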

Additional Examples-1

A:                                   Unit-free operator AUF (rows of A divided by the data errors):

 2.65   2.71   2.41   2.79    1        80.449   82.271   73.163   84.699   30.358
-0.06  -0.02   0.37   0.25    1       -12.579   -4.193   77.568   52.411  209.644
 55.5   48    120     55    189        36.092   31.215   78.037   35.767  122.909
 12      0    105    270      0        19.268    0.000  168.593  433.526    0.000
  1      1      1      1      1        66.667   66.667   66.667   66.667   66.667

x = [0.41, 0.07, 0.22, 0.05, 0.25]
Ax = b = [2.196, 0.318, 102.515, 41.52, 1];  errors = [0.0329, 0.0048, 1.5377, 0.6228, 0.015]
bUF = [66.667, 66.667, 66.667, 66.667, 66.667];  errors = [1, 1, 1, 1, 1]

AUF = UΣVᵀ

U:
-0.25    0.15    0.684  -0.133   0.656
-0.229   0.716  -0.502  -0.38    0.195
-0.185   0.483   0.093   0.844  -0.107
-0.896  -0.393  -0.2     0.046  -0.033
-0.221   0.277   0.482  -0.352  -0.721

Σ = diag(506.15, 259.24, 140.34, 14.06, 3.12)

Vᵀ:
-0.11   -0.079  -0.427  -0.875  -0.184
 0.121   0.166   0.218  -0.325   0.897
 0.662   0.665   0.119  -0.14   -0.292
 0.145  -0.455   0.786  -0.311  -0.239
-0.717   0.563   0.371  -0.113  -0.138

Tikhonov Regularization for AUF

b = [2.196, 0.318, 102.515, 41.52, 1];  x_true = [0.41, 0.07, 0.22, 0.05, 0.25], ‖x_true‖ = 0.54

        λ=0     e       λ=1     e        λ=2     e        λ=3     e
        0.41    0.00    0.40    0.015    0.38    0.026    0.37    0.04
        0.07    0.00    0.08   -0.014    0.09   -0.024    0.10   -0.03
        0.21    0.00    0.22   -0.012    0.23   -0.017    0.23   -0.02
        0.05    0.00    0.05    0.000    0.05    0.001    0.05    0.00
        0.25    0.00    0.25    0.001    0.25    0.003    0.24    0.01
‖·‖     0.53    0.01    0.53    0.023    0.52    0.04     0.52    0.05

        λ=5     e        λ=10    e        λ=30    e        λ=50    e
        0.36    0.053    0.33    0.079    0.29    0.119    0.28    0.13
        0.11   -0.045    0.14   -0.066    0.17   -0.101    0.18   -0.11
        0.24   -0.029    0.25   -0.040    0.26   -0.052    0.26   -0.05
        0.05    0.005    0.04    0.008    0.04    0.011    0.04    0.01
        0.24    0.008    0.24    0.012    0.23    0.017    0.23    0.02
‖·‖     0.51    0.08     0.50    0.11     0.49    0.17     0.48    0.18

Scaling did show a significant effect on the singular value spectrum of the operator. The rationale followed in achieving a good scaling is that the uncertainties in all the elements of A are of the same order.
The quality of the scaling done may be understood by looking at the norms of the columns of the operator.

Interpreting the Norm of the Columns

Operator   Norms of the columns
A            57    48   159   276   189
AUF         113   110   224   451   254

How should the change in the norms be interpreted?

Tikhonov in Both the Cases

A: L-curve parameters and regularized solutions [x]ᵀ:

λ         ‖Ax−b‖   ‖x‖    ‖e‖      x₁₁    x₂₁    x₃₁    x₄₁    x₅₁
0.0468    0.01     0.48   0.22     0.24   0.22   0.24   0.05   0.25
1         0.06     0.47   0.24     0.22   0.22   0.22   0.06   0.27
3         0.16     0.46   0.25     0.20   0.20   0.21   0.06   0.28
5         0.23     0.45   0.25     0.19   0.19   0.21   0.06   0.29
10        0.34     0.44   0.27     0.17   0.16   0.21   0.06   0.30
20        0.46     0.43   0.28     0.15   0.14   0.21   0.07   0.31

A is good at minimizing the residual norm, while AUF minimizes the solution error norm ‖e‖. The norm of x is almost the same in both. Tikhonov regularization gives the approximate solution x_λ as the unique minimizer of the quadratic cost function

    ‖Ax − b‖₂² + λ²‖x − x₀‖₂²

i.e. two options are available (contd.)

Tikhonov in Both the Cases

AUF: L-curve parameters and regularized solutions [x]ᵀ:

λ      ‖Ax−b‖   ‖x‖    ‖e‖      x₁₁    x₂₁    x₃₁    x₄₁    x₅₁
5      0.22     0.51   0.07     0.36   0.11   0.24   0.05   0.24
10     0.29     0.50   0.11     0.33   0.14   0.25   0.04   0.24
20     0.41     0.49   0.14     0.30   0.16   0.26   0.04   0.23
50     0.70     0.48   0.18     0.28   0.18   0.26   0.04   0.23
100    1.06     0.48   0.20     0.26   0.20   0.26   0.04   0.23

(a) Define an upper bound for the solution error norm and minimize the residual.
(b) Limit the residual by choice and minimize the error norm ‖e‖ = ‖x − x_λ‖.

Additional Examples-1

[Figure-only slides]
