
Assalamu'alaikum wr. wb.

Peace be upon you, and may Allah's mercy and blessings be upon you.

Good morning, ladies and gentlemen. Thank you for the time given to me. My name is Muhammad Imaduddin, and I am a Mathematics Master's student at Diponegoro University. On this occasion I will present our research, titled
"Analysis of the Orthogonal Matching Pursuit Algorithm Using Orthogonal and Orthonormal Bases on Digital Images".

The topics that I will discuss include:


1. Background
2. Limitation of Problems and Research Objectives
3. Research Methods
4. Results and Discussion
5. Conclusions

The first topic is:

Background

In this era, information in the form of digital images is very important. Compressive sensing is a technique for reducing the image sampling rate while improving the accuracy of image reconstruction. For image reconstruction, researchers use the Orthogonal Matching Pursuit (OMP) algorithm because it requires the least time to reconstruct an image. The bases used with this algorithm are the Haar, Hadamard, and Walsh bases, as well as arbitrary orthogonal bases.

The next topic is:

Limitation of Problems and Research Objectives

The limitation of the problem in this research is that the research focuses only on reconstructing the compressed signal using the Orthogonal Matching Pursuit algorithm with orthonormal and orthogonal bases (dictionaries).

The purposes of this study are:

1. To obtain the steps for reconstructing images using the Orthogonal Matching Pursuit algorithm in MATLAB.

2. To obtain the optimal basis for reconstructing images using the Orthogonal Matching Pursuit algorithm.
Next is the Research Method

After the research methods and flowcharts have been designed, the next step is to simulate the results and discuss the reconstruction of images using the Orthogonal Matching Pursuit algorithm on orthogonal and orthonormal bases.

The first step is to create a MATLAB GUI design that displays the initial image, the binary image, the representation of the binary image as a matrix, and the simulation results of image reconstruction using OMP.

After the design is made, the next step is to choose an arbitrary RGB image, convert it into a binary image, and represent the binary image in the form of a square matrix.

The data used in this study are arbitrary images, which are processed into binary images. Once the binary image is formed, it is represented as a matrix for further reconstruction using the Orthogonal Matching Pursuit algorithm.
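The conversion described above can be sketched in code. The research used MATLAB; below is a minimal NumPy sketch of the same idea, where the luma weights and the 0.5 threshold are my assumptions, not values stated in the presentation.

```python
import numpy as np

def rgb_to_binary(rgb, threshold=0.5):
    """Convert an RGB image (H x W x 3, values in [0, 1]) to a binary image.

    Weighted grayscale followed by thresholding; the weights and the 0.5
    threshold are assumptions, since the presentation does not state which
    conversion rule was used in MATLAB.
    """
    gray = rgb @ np.array([0.299, 0.587, 0.114])  # weighted grayscale
    return (gray > threshold).astype(np.uint8)    # 1 = bright, 0 = dark

# Tiny synthetic "image" standing in for the pas photo.
rgb = np.zeros((2, 2, 3))
rgb[0, 0] = [1.0, 1.0, 1.0]   # white pixel  -> 1
rgb[1, 1] = [0.9, 0.1, 0.1]   # dark red     -> 0
binary = rgb_to_binary(rgb)
```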
1. Input an arbitrary digital image.
A 4 x 6 pas photo of Muhammad Imaduddin is used.

[Figure: RGB image, pas photo of Didin]

2. Convert the RGB image into a binary image.

[Figure: Binary image, pas photo of Didin]

3. Represent the binary image in the form of a square matrix.

For example, the matrix produced from the binary image is as follows:

A = [  2  3  1  1
       4  5  3  2
       1  3  2  4
       1  2  1  0 ]
4. Perform the blocking and vectoring process on the matrix.
This is the first block of the matrix after the vectoring process:

A = [  2
       3
       4
       5 ]
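The blocking and vectoring step can be sketched as follows, assuming 2 x 2 blocks flattened row-wise; the presentation does not state the block order explicitly, but this choice reproduces the vector [2, 3, 4, 5] used throughout the worked example.

```python
import numpy as np

# Example 4 x 4 matrix from the presentation.
A = np.array([[2, 3, 1, 1],
              [4, 5, 3, 2],
              [1, 3, 2, 4],
              [1, 2, 1, 0]])

def block_and_vectorize(M, b=2):
    """Split M into b x b blocks and flatten each block row-wise
    into a vector of length b*b (assumed block order: left-to-right,
    top-to-bottom)."""
    rows, cols = M.shape
    blocks = []
    for i in range(0, rows, b):
        for j in range(0, cols, b):
            blocks.append(M[i:i+b, j:j+b].flatten())
    return blocks

blocks = block_and_vectorize(A)
# The first block [[2, 3], [4, 5]] vectorizes to [2, 3, 4, 5].
```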
5. Input the bases that will be used for the reconstruction process using the Orthogonal Matching Pursuit algorithm.
The following are the bases to be used:

1. Haar basis

[  0.5     0.5     0.5     0.5
   0.5     0.5    -0.5    -0.5
   0.707  -0.707   0       0
   0       0       0.707  -0.707 ]

2. Hadamard basis

[  0.5     0.5     0.5     0.5
   0.5    -0.5     0.5    -0.5
   0.5     0.5    -0.5    -0.5
   0.5    -0.5    -0.5     0.5 ]

3. Walsh basis

[  0.5     0.5     0.5     0.5
  -0.5    -0.5     0.5     0.5
  -0.5     0.5     0.5    -0.5
   0.5    -0.5     0.5    -0.5 ]

4. An arbitrary n x n basis that has been orthonormalized by the Gram-Schmidt method

[  0.1667  -0.2089   0.7723  -0.5763
   0.5000   0.3989  -0.4598  -0.6161
   0.1667   0.8167   0.4371   0.3378
   0.8333  -0.3608   0.0339   0.4173 ]
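These bases can be generated in code. The following NumPy sketch (Python rather than the MATLAB used in the research) builds the Haar and Hadamard bases above and shows a classical Gram-Schmidt routine for orthonormalizing an arbitrary full-rank matrix; the sample input matrix at the end is mine, not the one from the slide.

```python
import numpy as np

# Haar basis used in the worked example (0.707 ~ 1/sqrt(2) = 1.414/2).
s = 1 / np.sqrt(2)
H_haar = np.array([[0.5,  0.5,  0.5,  0.5],
                   [0.5,  0.5, -0.5, -0.5],
                   [s,   -s,    0.0,  0.0],
                   [0.0,  0.0,  s,   -s  ]])

# Normalized 4 x 4 Hadamard basis.
H_hadamard = 0.5 * np.array([[1,  1,  1,  1],
                             [1, -1,  1, -1],
                             [1,  1, -1, -1],
                             [1, -1, -1,  1]])

def gram_schmidt(B):
    """Orthonormalize the columns of B (classical Gram-Schmidt)."""
    Q = np.zeros_like(B, dtype=float)
    for k in range(B.shape[1]):
        v = B[:, k].astype(float)
        for j in range(k):
            # Remove the component along each already-built direction.
            v = v - (Q[:, j] @ B[:, k]) * Q[:, j]
        Q[:, k] = v / np.linalg.norm(v)
    return Q

# Any full-rank matrix can be turned into an orthonormal basis this way.
Q = gram_schmidt(np.array([[1., 2., 0., 1.],
                           [3., 1., 1., 0.],
                           [1., 0., 2., 1.],
                           [5., 1., 0., 2.]]))
```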
6. In the first step, the Haar basis is chosen for the image reconstruction process using the OMP algorithm.
Calculate the value of y as the input of the Orthogonal Matching Pursuit algorithm:

y = H · A

y = [  0.5     0.5     0.5     0.5        [  2
       0.5     0.5    -0.5    -0.5     ·     3
       0.707  -0.707   0       0             4
       0       0       0.707  -0.707 ]       5 ]

y = [  7.0000
      -2.000
      -0.707
      -0.707 ]
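This measurement step is easy to verify numerically. The sketch below uses the exact value 1/sqrt(2) in place of the rounded 0.707 and 1.414/2 from the slides.

```python
import numpy as np

s = 1 / np.sqrt(2)  # written as 1.414/2 (about 0.707) in the slides
H = np.array([[0.5,  0.5,  0.5,  0.5],
              [0.5,  0.5, -0.5, -0.5],
              [s,   -s,    0.0,  0.0],
              [0.0,  0.0,  s,   -s  ]])
A = np.array([2.0, 3.0, 4.0, 5.0])

y = H @ A  # measurement vector fed into OMP: [7, -2, -0.7071, -0.7071]
```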

7. Reconstruct the initial vector (the vectorized initial image matrix) with the value of y and the Haar basis as input:

y = [  7.0000
      -2.000
      -0.707
      -0.707 ]

H = [  0.5     0.5     0.5     0.5
       0.5     0.5    -0.5    -0.5
       0.707  -0.707   0       0
       0       0       0.707  -0.707 ]

8. Determine the biggest contribution:

w = H^T · y

w = [  0.5   0.5   0.707   0          [  7.0000
       0.5   0.5  -0.707   0       ·    -2.000
       0.5  -0.5   0       0.707        -0.707
       0.5  -0.5   0      -0.707 ]      -0.707 ]

w = [  2.000151      H_1
       2.999849   =  H_2
       4.000151      H_3
       4.999849 ]    H_4

From these results, the highest contribution is H_4, so we choose it. The first highest contribution is on the basis vector H_4.

So we obtain A_new = [ H_4 ] = [  0.5
                                 -0.5
                                  0
                                 -0.707 ]
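The atom-selection rule of this step, in code: correlate every column of H with y and take the largest absolute value. With exact 1/sqrt(2) entries the rounding noise disappears and w comes out as exactly [2, 3, 4, 5], so the fourth atom (H_4, index 3 when counting from zero) wins.

```python
import numpy as np

s = 1 / np.sqrt(2)
H = np.array([[0.5,  0.5,  0.5,  0.5],
              [0.5,  0.5, -0.5, -0.5],
              [s,   -s,    0.0,  0.0],
              [0.0,  0.0,  s,   -s  ]])
y = np.array([7.0, -2.0, -s, -s])

# Correlate every atom (column of H) with the current residual.
w = H.T @ y
best = int(np.argmax(np.abs(w)))  # index of the biggest contribution
```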
9. Calculate the residual value and solve the least-squares problem for the selected contribution:

r = y − A_new · L_p

L_p = A_new^+ · y = [ (A_new^T · A_new) ]^(−1) · A_new^T · y_old

Because in the first iteration the chosen contribution is H_4, the member of A_new is the contribution from H_4:

A_new = [ H_4 ] = [  0.5
                    -0.5
                     0
                    -0.707 ]

A_new^T = [ 0.5  -0.5  0  -0.707 ]

y_old = [  7.0000
          -2.000
          -0.707
          -0.707 ]

y_new = r_new

So the value of L_p is:

A_new^T · A_new = [ 0.999849 ]

A_new^T · y_old = [ 4.999849 ]

L_p = [ 0.999849 ]^(−1) · [ 4.999849 ]
    = [ 1.00015102 ] · [ 4.999849 ]
    = [ 5.00060409 ]

Because this L_p is the coefficient for H_4, we write x_rec as follows:

x_rec = [  0
           0
           0
           5.00060409 ]
Next, calculate the residual value:

r = y − A_new · L_p

r = [  7.0000        [  0.5
      -2.000    −      -0.5      · [ 5.00060409 ]
      -0.707            0
      -0.707 ]         -0.707 ]

r = [  7.0000        [  2.500302
      -2.000    −      -2.500302
      -0.707            0
      -0.707 ]         -3.535427 ]

r = [  4.499698
       0.500302
      -0.707
       2.828427 ]
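This least-squares step can be checked with NumPy's `lstsq`, which computes the same pseudoinverse solution (A_new^T A_new)^(-1) A_new^T y. With the exact 1/sqrt(2) value, A_new^T A_new is exactly 1 and L_p is exactly 5, so the 5.00060409 above is purely rounding noise from using 0.707.

```python
import numpy as np

s = 1 / np.sqrt(2)
y = np.array([7.0, -2.0, -s, -s])
H4 = np.array([0.5, -0.5, 0.0, -s])  # selected atom

A_new = H4.reshape(-1, 1)
# Least-squares coefficient L_p = (A^T A)^-1 A^T y (the pseudoinverse).
L_p, *_ = np.linalg.lstsq(A_new, y, rcond=None)
r = y - A_new @ L_p  # residual passed to the next iteration
```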

10. Repeat step 2, looking for the next highest contribution. Because H_4 has been chosen as the first highest contribution, the next highest contribution is chosen from among H_1, H_2 and H_3, using the new y, i.e. r:

w = H^T · y_new

w = [ H_1  H_2  H_3 ]^T · y_new

w = [  0.5   0.5   0.707   0          [  4.499698
       0.5   0.5  -0.707   0       ·     0.500302
       0.5  -0.5   0       0.707 ]      -0.707
                                         2.828427 ]

w = [  2.000151      H_1
       2.999849   =  H_2
       3.999396 ]    H_3
11. From these results, the highest contribution is H_3, so we choose it. The second highest contribution is on the basis vector H_3.

So the newly selected atom is H_3 = [  0.5
                                      -0.5
                                       0
                                       0.707 ]
Calculate the residual value and solve the least-squares problem for the selected contributions:

r = y − A_new · L_p

L_p = A_new^+ · y = [ (A_new^T · A_new) ]^(−1) · A_new^T · y_old

Because in the second iteration the selected contribution is H_3, the members of A_new are the contributions from H_4 and H_3:

A_new = [ H_4  H_3 ] = [  0.5     0.5
                         -0.5    -0.5
                          0       0
                         -0.707   0.707 ]

A_new^T = [  0.5  -0.5  0  -0.707
             0.5  -0.5  0   0.707 ]

y_old = [  7.0000
          -2.000
          -0.707
          -0.707 ]

y_new = r_new

So the value of L_p is:

A_new^T · A_new = [  0.999849  0.000151
                     0.000151  0.999849 ]

A_new^T · y_old = [  4.999849
                     4.000151 ]

L_p = [  0.999849  0.000151 ]^(−1)    [  4.999849
         0.000151  0.999849 ]      ·     4.000151 ]

    = [  1.000151  -0.000151 ]    [  4.999849
        -0.000151   1.000151 ] ·     4.000151 ]

    = [  5.000
         4.000 ]

Because this L_p contains the coefficients for H_4 and H_3, we write x_rec as follows:

x_rec = [  0
           0
           4.000
           5.000 ]
Next, calculate the residual value:

r = y − A_new · L_p

r = [  7.0000        [  0.5     0.5
      -2.000    −      -0.5    -0.5       · [  5.000
      -0.707            0       0              4.000 ]
      -0.707 ]         -0.707   0.707 ]

r = [  7.0000        [  4.500
      -2.000    −      -4.500
      -0.707            0.000
      -0.707 ]         -0.707 ]

r = [  2.500
       2.500
      -0.707
       0.000 ]
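The second iteration, in the same NumPy form: the least-squares fit is now over both selected atoms jointly, which is what distinguishes Orthogonal Matching Pursuit from plain Matching Pursuit.

```python
import numpy as np

s = 1 / np.sqrt(2)
y = np.array([7.0, -2.0, -s, -s])
H4 = np.array([0.5, -0.5, 0.0, -s])  # atom chosen in iteration 1
H3 = np.array([0.5, -0.5, 0.0,  s])  # atom chosen in iteration 2

A_new = np.column_stack([H4, H3])
# Refit the coefficients of ALL selected atoms jointly.
L_p, *_ = np.linalg.lstsq(A_new, y, rcond=None)
r = y - A_new @ L_p  # residual for the next atom search
```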
12. Repeat step 2, looking for the next highest contribution. Because H_4 and H_3 have been chosen as the first and second highest contributions, the next highest contribution is chosen only from among H_1 and H_2, using the new y, i.e. r:

w = H^T · y_new

w = [ H_1  H_2 ]^T · y_new

w = [  0.5   0.5   0.707   0          [  2.500
       0.5   0.5  -0.707   0 ]     ·     2.500
                                        -0.707
                                         0.000 ]

w = [  2.0002      H_1
       2.9998 ] =  H_2

13. In the third iteration, atom H_2 gives the highest contribution, with a value of 2.9998 (ignoring
negative values or using only absolute values).
So the value of H_2 is used as the value of A_new, then the value of A_new will be used to calculate
the Least Square Problem value ( L p )
1 1 1

Anew =[ H 4 H 3

1 −1
H 2 ]=

[
2
−1
2
0
2
−1
2
0
−0.707 0.707
2
1
2
−0.707
0
]
[ ]
0 −0.707
2 2
T 1 −1
Anew = 0 0.707
2 2
1 1
−0.707 0
2 2

14. The atoms H_2, H_3 and H_4 have now been selected. Just as before, we calculate the residual value and solve the least-squares problem for the selected contributions:

r = y − A_new · L_p

L_p = A_new^+ · y = [ (A_new^T · A_new) ]^(−1) · A_new^T · y_old

So the value of L_p is:

A_new^T · A_new = [  0.999849  0.000151  0
                     0.000151  0.999849  0
                     0         0         0.999849 ]

A_new^T · y_old = [  4.999849
                     4.000151
                     2.999849 ]

L_p = [  5.000
         4.000
         3.000 ]

x_rec = [  0
           3.000
           4.000
           5.000 ]
Then the residual value is:

r = y_old − A_new · L_p

r = [  7.0000        [  6.000151
      -2.000    −      -2.99985
      -0.707           -2.1212
      -0.707 ]         -0.707 ]

r = [  0.9998
       0.9998
       1.4142
       0.000 ]

15. Repeat step 2, looking for the next highest contribution. Since H_4, H_3 and H_2 have been selected as the first, second and third highest contributions, the next highest contribution can only come from H_1, using the new y, i.e. r:

w = H^T · y_new

w = [ H_1 ]^T · y_new

w = [ 0.5  0.5  0.707  0 ]  ·  [  0.9998
                                  0.9998
                                  1.4142
                                  0.000 ]

w = [ 1.999698 ]
16. In the fourth iteration, the atom H_1 gives the highest contribution, with a value of 1.999698 (using absolute values).
So H_1 is added to A_new, and A_new is then used to calculate the least-squares solution (L_p):

A_new = [ H_4  H_3  H_2  H_1 ] = [  0.5     0.5     0.5     0.5
                                   -0.5    -0.5     0.5     0.5
                                    0       0      -0.707   0.707
                                   -0.707   0.707   0       0 ]

A_new^T = [  0.5  -0.5   0      -0.707
             0.5  -0.5   0       0.707
             0.5   0.5  -0.707   0
             0.5   0.5   0.707   0 ]
17. Calculate the residual value and solve the least-squares problem for the selected contributions:

r = y − A_new · L_p

L_p = A_new^+ · y = [ (A_new^T · A_new) ]^(−1) · A_new^T · y_old

So the value of L_p is:

A_new^T · A_new = [  0.999849  0.000151  0         0
                     0.000151  0.999849  0         0
                     0         0         0.999849  0.000151
                     0         0         0.000151  0.999849 ]

A_new^T · y_old = [  4.999849
                     4.000151
                     2.999849
                     2.000151 ]

L_p = [  5.000
         4.000
         3.000
         2.000 ]

x_rec = [  2.000
           3.000
           4.000
           5.000 ]
[ ]
Then the residual value is:

r = y_old − A_new · L_p

r = [  7.0000        [  7.0000
      -2.000    −      -2.000
      -0.707           -0.707
      -0.707 ]         -0.707 ]

r = [  0
       0
       0
       0 ]
18. Because the residual value is zero, the iteration stops.

The same procedure is carried out for reconstruction on the Hadamard and Walsh bases and on the basis orthonormalized by the Gram-Schmidt method.
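The whole procedure of steps 6 to 18 can be collected into one loop. Below is a minimal NumPy sketch of OMP (the research itself used MATLAB); run on the Haar example above, it selects H_4, H_3, H_2 and H_1 in that order and recovers the vector [2, 3, 4, 5] with zero residual.

```python
import numpy as np

def omp(H, y, tol=1e-8, max_iter=None):
    """Orthogonal Matching Pursuit: recover x with y = H x by greedily
    picking the atom (column of H) most correlated with the residual,
    then refitting all picked coefficients jointly by least squares."""
    n = H.shape[1]
    max_iter = max_iter or n
    support, r = [], y.astype(float)
    x = np.zeros(n)
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:
            break                        # residual ~ zero: stop (step 18)
        w = H.T @ r                      # contributions of all atoms
        support.append(int(np.argmax(np.abs(w))))
        A_new = H[:, support]
        L_p, *_ = np.linalg.lstsq(A_new, y, rcond=None)
        x = np.zeros(n)
        x[support] = L_p                 # current reconstruction x_rec
        r = y - A_new @ L_p              # new residual
    return x, r

# Reproduce the worked Haar-basis example.
s = 1 / np.sqrt(2)
H = np.array([[0.5,  0.5,  0.5,  0.5],
              [0.5,  0.5, -0.5, -0.5],
              [s,   -s,    0.0,  0.0],
              [0.0,  0.0,  s,   -s  ]])
x_true = np.array([2.0, 3.0, 4.0, 5.0])
y = H @ x_true
x_rec, r = omp(H, y)
```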

The following is a comparison table of the pas photo image reconstruction using the Orthogonal Matching Pursuit algorithm on the Haar, Hadamard, Walsh, and Gram-Schmidt-orthonormalized bases.

From the above experiments it can be concluded that if the basis used is orthonormal, the iteration converges faster than with a non-orthonormal basis. In addition, when a basis other than an orthonormal one is chosen, the number of basis columns must equal the number of rows of the initial matrix. And if the basis is both orthonormal and orthogonal, the iteration is faster than with a basis that only satisfies the orthonormal property.
Analysis of the Orthogonal Matching Pursuit algorithm using orthonormal and orthogonal bases in binary image reconstruction shows that, when reconstructing with the OMP algorithm, the basis used should have the orthonormal property. If the basis used in image reconstruction is an n x n matrix, it should be orthonormal or orthogonal, because if an n x n matrix is orthonormal then it is also orthogonal, as can be seen in the theory of Chapter 2. If a basis that does not satisfy the orthonormal or orthogonal property is used, the reconstruction process can still be carried out with an arbitrary basis, but the basis must have the same number of columns as the length of the vector produced by the vectoring process. For example, if the initial matrix is A_nxn or A_mxn and the blocking and vectoring process turns it into a vector of size m x 1, then the basis used is a matrix B_nxm whose number of columns equals the m of matrix A.
This concludes my presentation. Thank you for your attention, ladies and gentlemen. I apologize for any shortcomings.
Wassalamu'alaikum wr. wb.
