Immunocomputing

by SVD and morphotronics network


Germano Resconi
Dept. of Mathematics and Physics, Catholic University Brescia, I-25121, Italy,
resconi@numerica.it

Zenon Chaczko
University of Technology Sydney, NSW, Australia,
zenon@eng.uts.edu.au

Abstract

The immune system of vertebrates possesses the capabilities of "intelligent"
information processing, which include memory and the abilities to learn, to
recognize, and to make decisions with respect to unknown situations.
The mathematical formalization of these capabilities (Tarakanov and Dasgupta 2000)
forms the basis of immunocomputing (IC) as a new computing approach that
replicates the principles of information processing by proteins and immune
networks (Tarakanov et al. 2003). This IC approach looks rather constructive as a basis for a new
kind of computing. With the morphotronic system, or the analogous SVD, we can create an effective
learning process and build immune memory by means of projection operators. Given the immune
memory, it is possible to recognize and compare antigens so as to take defensive action and
eliminate dangerous cells.
1. Introduction to SVD

1.1 Definition of the SVD

We assume a familiarity with the basic terminology of linear algebra and refer
the reader to [ Reference of SVD ] for a more complete coverage.
We restrict our attention to matrices of real numbers and refer the reader to [ Reference ] for a
discussion of the SVD using complex numbers.
Using the superscript T to denote the transpose of a vector or matrix, we say two vectors
X and Y are orthogonal if

XT Y = 0.

In two- or three-dimensional space this simply means that the vectors are perpendicular.

Let A be a square matrix whose columns are mutually orthogonal vectors of length 1, i.e. each column X satisfies

XT X = 1.

Then A is an orthogonal matrix and

AT A = A AT = I,

where I is the identity matrix. To simplify the notation, assume that a matrix A has at least as many rows
M as columns N.

A singular value decomposition of an M x N matrix A is any factorization of the form

A = U Σ VT,

where U is an M x M orthogonal matrix, V is an N x N orthogonal matrix, and Σ is an
M x N diagonal matrix with non-negative entries si.

Furthermore, it can be shown that U and V (which are not unique) can be chosen such that

s1 ≥ s2 ≥ … ≥ sN ≥ 0.

Henceforth we will assume the SVD has this property. The quantities si are called the
singular values of A, and the columns of U and V are called the left and right singular
vectors, respectively.

For example, taking any small matrix A and computing its SVD, we see that the columns of U and V
are of unit length, and a simple calculation of dot products shows them to be mutually orthogonal
(see the sketch below).
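A minimal numpy sketch of such an example, with an illustrative 3 x 2 matrix chosen as an assumption (not the paper's original example):

    import numpy as np

    # Hypothetical 3 x 2 matrix (M = 3 rows >= N = 2 columns); the numbers are illustrative.
    A = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])

    # Full SVD: A = U @ S @ Vt with U (3 x 3) and V (2 x 2) orthogonal.
    U, s, Vt = np.linalg.svd(A, full_matrices=True)
    S = np.zeros((3, 2))
    S[:2, :2] = np.diag(s)                  # 3 x 2 "diagonal" matrix Sigma

    # Columns of U and V have unit length and are mutually orthogonal.
    assert np.allclose(U.T @ U, np.eye(3))
    assert np.allclose(Vt @ Vt.T, np.eye(2))
    # The factorization reproduces A, and the singular values come ordered s1 >= s2 >= 0.
    assert np.allclose(U @ S @ Vt, A)
    print("singular values:", s)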
From the components of the SVD we can determine many properties of the original
matrix.

The null space of a matrix A is the set of x for which

Ax = 0,

and the range of A is the set of b for which

Ax = b

has a solution for x.

Let uj and vj be the columns of U and V respectively. Then the decomposition A = U Σ VT gives

A vj = sj uj.

If sj = 0, then A vj = 0 and vj is in the null space of A, whereas if sj ≠ 0, then
uj = A vj / sj is in the range of A.

1.2 Properties of the SVD

1.2.1 SVD and Matrix Rank

Fundamental to linear algebra is the notion of rank. Numerous theorems begin with the condition
"If matrix A is of full rank, then the following property holds". However, if the matrix is rank
deficient (or nearly so), then small perturbations of the matrix values from round-off errors or
fuzzy data will yield a matrix which is of full rank. Hence determining the rank of a matrix is
non-trivial. The SVD lends us a practical definition of rank, namely the number of non-zero
singular values, and also allows us to quantify the notion of near rank deficiency.

1.2.2 SVD and Linear Independence

Another use of the SVD provides a measure, called a condition number, which is related to the
measure of linear independence between the column vectors of the matrix. The condition number
with respect to the Euclidean norm of a matrix A is

κ(A) = smax / smin,

where smax and smin are the largest and smallest singular values of A.
Using the condition number we can quantify the independence of the columns of A: the larger
κ(A), the closer the columns are to linear dependence.
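Both notions translate into a few lines of numpy. A minimal sketch, assuming the common convention of a tolerance scaled by the largest singular value (the tolerance rule and the sample matrix are assumptions, not taken from the text):

    import numpy as np

    def rank_and_condition(A, rel_tol=1e-12):
        # Numerical rank: singular values above a tolerance scaled by the largest one.
        s = np.linalg.svd(A, compute_uv=False)        # singular values, descending
        rank = int(np.sum(s > rel_tol * s[0]))
        cond = s[0] / s[-1] if s[-1] > 0 else np.inf  # kappa(A) = smax / smin
        return rank, cond

    # A nearly rank-deficient matrix: huge condition number despite full numerical rank.
    A = np.array([[1.0, 2.0], [2.0, 4.0 + 1e-9], [3.0, 6.0]])
    print(rank_and_condition(A))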

1.3 Applications of SVD

1.3.1 Solutions to Linear Equations

Numerous practical problems can be expressed in the language of linear algebra. A linear
system involves a set of equations in N variables. Such a problem can be expressed in terms of a
coefficient matrix A, a vector x of variables, and a
vector b, such that a solution to the linear system Ax = b is an assignment of values to the vector x.
Using the SVD of A we can determine whether a solution exists, as well as the general form of the
possible solutions x.

Writing A = U Σ VT, the system Ax = b becomes

U Σ VT x = b, that is, Σ VT x = UT b.

Substituting z = VT x and d = UT b we have

Σ z = d (1)

Let Rank ( A ) = k = the number of non-zero singular values si.

Studying the linear equations of the diagonal system (1), we can determine whether or not there is
a solution: a solution exists if and only if dj = 0 whenever sj = 0 or j > N.
If k < N, then the zj associated with a zero sj can be set to any value and still yield a solution.
A general form of the possible solutions can then be expressed in terms of these arbitrary
components of z when transformed back to the original coordinates by x = V z.
The condition number of a matrix can also describe the sensitivity of solutions of linear systems to
inaccuracies in the data. As an extension of solving linear systems, suppose we wish to find a
solution where Ax is approximately equal to b; by this we mean the least-squares solution x that
minimizes

|| Ax − b ||.
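A minimal sketch of this procedure in numpy, with illustrative A and b chosen as assumptions; zero (or tiny) singular values are handled by setting the corresponding zj to 0, which gives the minimum-norm least-squares solution:

    import numpy as np

    def svd_solve(A, b, tol=1e-12):
        # Minimum-norm least-squares solution of A x ~ b via the diagonal system (1).
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        d = U.T @ b                                          # d = UT b
        z = np.where(s > tol, d / np.maximum(s, tol), 0.0)   # zj = dj / sj, free zj := 0
        return Vt.T @ z                                      # x = V z

    A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0, 2.0])
    x = svd_solve(A, b)
    print(x, np.linalg.norm(A @ x - b))                      # x minimizes || A x - b ||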

1.3.2 Noisy Signal Filtering

Problems in signal processing often use linear models for signals. In ideal noise-free
conditions the measurement data can be arranged in a matrix which is known to be
rank deficient; by this we mean that the signal is assumed to lie in a proper subspace of
Euclidean space. However, the presence of noise, either from rounding error or instrument
error, results in a measurement matrix that is often of full rank. Usually the models assume
that the error can be separated from the data, in that the noise component is that which
lies in a subspace orthogonal to the signal subspace. For this reason the SVD is used to
approximate the matrix, decomposing the data into an optimal estimate of the signal and
the noise components.
Suppose A is the measurement matrix where each column Ci consists of a signal component xi and a
noise component ni:

Ci = xi + ni.

The vector x representing the signal is known to lie in a rank-k subspace, though the precise
subspace is not known. Therefore let x = Hc for a coefficient vector c and a matrix H whose
columns are the basis vectors of some rank-k subspace. The least squared error between A and Hc
is minimized by choosing H to be the optimal rank-k approximation Ak to A.
Then the k columns of U corresponding to the k largest singular values span the rank-k subspace H.
The resulting error is governed by the discarded singular values: the squared error equals the sum
of the squares of the singular values sk+1, …, sN.
Using the SVD as above, we see that the original data matrix A is decomposed into the orthogonal
components Ak and A − Ak, the latter corresponding to the orthogonal subspace defining the noise
components.
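A short numpy sketch of this filtering step, assuming synthetic rank-1 data plus small noise (the data model here is purely illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    signal = np.outer(rng.normal(size=20), rng.normal(size=8))   # exact rank-1 signal
    A = signal + 0.01 * rng.normal(size=(20, 8))                 # full-rank measurements

    # Optimal rank-k approximation Ak: keep only the k largest singular triplets.
    k = 1
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Ak = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]                   # signal estimate
    noise = A - Ak                                               # orthogonal residual

    print("estimate error:", np.linalg.norm(Ak - signal))
    print("residual energy:", np.linalg.norm(noise))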

1.3.3 SVD and the pseudoinverse

The SVD analysis of a matrix A gives an easy and uniform way of computing its inverse A-1,
whenever it exists, or its pseudoinverse A+, otherwise.
Namely, if

A = U Σ VT,

then

A+ = V Σ+ UT.

The singular values s1, …, sr are located on the diagonal of the matrix Σ, and the reciprocals of the
singular values,

1/s1, …, 1/sr,

are on the diagonal of the n x m matrix Σ+. We know that for the pseudoinverse we have

( A+ )+ = A.

For a square and non-singular matrix A,

A+ = A-1.

Moreover,

A A+ = U Σ Σ+ UT,

and A A+ x = y, where y is the projection of x onto the subspace spanned by the columns of A. Finally,

A A+ A = A , A+ A A+ = A+.
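A minimal sketch building A+ from the SVD and checking these identities, with an assumed small matrix (numpy's own pinv computes the same quantity):

    import numpy as np

    A = np.array([[1.0, 0.0], [0.0, 2.0], [0.0, 0.0]])   # assumed 3 x 2 example

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_plus = np.where(s > 1e-12, 1.0 / np.maximum(s, 1e-12), 0.0)
    A_plus = Vt.T @ np.diag(s_plus) @ U.T                # A+ = V Sigma+ UT

    assert np.allclose(A @ A_plus @ A, A)                # A A+ A = A
    assert np.allclose(A_plus @ A @ A_plus, A_plus)      # A+ A A+ = A+
    P = A @ A_plus                                       # projection onto the column space of A
    assert np.allclose(P @ P, P) and np.allclose(P, P.T)
    assert np.allclose(A_plus, np.linalg.pinv(A))        # agrees with numpy's pinv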
2. Morphotronic system

We define the morphotronic structure as a network of input and output of q x p matrices, where
q ≥ p. In figure 1 we show one morphotronic network.

[Figure: a network of modules Z1, Z2, Z3, Z4, Z5 connecting A to B.]

Figure 1 Network of the morphotronic system

The graph representation of one module of the morphotronic system is shown in figure 2.

[Figure: a single module Z connecting the input A to the output B.]

Figure 2 Simple module in the morphotronic system

Formally, the input-output relation between the matrices in figure 2 is given by the expression

B=ZA (2)
where

Z = B (AT A)-1 AT (3)


In fact we have

Z A = B (AT A)-1 AT A = B (4)

Proposition 1:
When q > p, for the same data A and B we have many different values of Z for which

B=ZA

Proof:

Consider the operator Z + LT. We have

B = ( Z + LT ) A = Z A + LT A.

Now when LT A = 0 with LT different from zero, we have that

B = ( Z + LT ) A = Z A + LT A = Z A.

So all the operators Z + LT transform A into B.

When we solve the homogeneous equation LT A = 0, or ( LT A )T = AT L = 0, given A we can
compute the family of the transformations

Z + LT

that transform A into B.

For example, given A, the solutions L of AT L = 0 can be computed directly (a numerical sketch
follows below). For every such L we can conclude that

AT L = 0 , LT A = 0

and

B = ( Z + LT ) A = Z A.

So we have many different Z that change A into B.
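A minimal numpy sketch of equations (2)-(4) and of Proposition 1, assuming illustrative matrices with q = 3 > p = 2 (so the homogeneous equation AT L = 0 has non-zero solutions):

    import numpy as np

    A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # q x p with q = 3 > p = 2
    B = np.array([[2.0, 1.0], [0.0, 3.0], [1.0, 1.0]])

    # Equation (3): Z = B (AT A)^-1 AT, which satisfies Z A = B (equation (4)).
    Z = B @ np.linalg.inv(A.T @ A) @ A.T
    assert np.allclose(Z @ A, B)

    # Proposition 1: any LT with LT A = 0 gives another operator sending A to B.
    U, _, _ = np.linalg.svd(A)
    n = U[:, -1]                            # spans the null space of AT (solves AT L = 0)
    LT = np.outer(np.ones(3), n)            # every row of LT is such a solution
    assert np.allclose(LT @ A, 0.0)
    assert np.allclose((Z + LT) @ A, B)     # Z + LT also transforms A into B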

2.1 Loops and projection operators

Now we study a loop of the morphotronic network, shown in figure 3.

[Figure: a loop in which Z1 maps A to B and Z2 maps B back to A.]

Figure 3 Loop in the morphotronic system

Applying equation (3), in this case we have

Z1 = B (AT A)-1 AT

Z2 = A (BT B)-1 BT
and

Q1 = Z1 Z2 = B (AT A)-1 AT A (BT B)-1 BT = B ( BT B )-1 BT

We have Q1 B = B , and Q1 is a projection operator with the property


Q12 = B ( BT B )-1 BT B ( BT B )-1 BT = B ( BT B )-1 BT = Q1

In the same loop we have another projection operator Q2 defined in this way

Q2 = Z2 Z1 = A (BT B)-1 BT B (AT A)-1 AT = A ( AT A )-1 AT

with Q2 A = A , and

Q22 = A ( AT A )-1 AT A ( AT A )-1 AT = A ( AT A )-1 AT = Q2
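A sketch verifying both loop operators numerically, reusing the illustrative matrices from above (assumptions, as before):

    import numpy as np

    A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    B = np.array([[2.0, 1.0], [0.0, 3.0], [1.0, 1.0]])

    Z1 = B @ np.linalg.inv(A.T @ A) @ A.T
    Z2 = A @ np.linalg.inv(B.T @ B) @ B.T

    Q1 = Z1 @ Z2                            # = B (BT B)^-1 BT
    Q2 = Z2 @ Z1                            # = A (AT A)^-1 AT
    assert np.allclose(Q1 @ B, B) and np.allclose(Q1 @ Q1, Q1)   # Q1 B = B, Q1^2 = Q1
    assert np.allclose(Q2 @ A, A) and np.allclose(Q2 @ Q2, Q2)   # Q2 A = A, Q2^2 = Q2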

2.2 Geometric image of the projection operator

Now we show, in a simple case, the geometric image of the projection operator Q2. For a 3 x 2
matrix A with columns A1 and A2, and a vector X = (X1, X2, X3), the operator
Q2 = A ( AT A )-1 AT projects X orthogonally onto the plane spanned by A1 and A2, as shown in
figure 4.

[Figure: the vector X = (X1, X2, X3) in the space (P1, P2, P3), the columns A1 and A2 spanning a plane, and the projection Q2 X onto that plane.]

Figure 4 Projection operator as loop in the morphotronic system
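A numerical sketch of this picture, with an assumed A whose columns span the P1-P2 plane so the projection is easy to read off:

    import numpy as np

    A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # columns A1, A2 span the P1-P2 plane
    X = np.array([1.0, 2.0, 3.0])

    Q2 = A @ np.linalg.inv(A.T @ A) @ A.T
    print(Q2 @ X)                            # [1. 2. 0.]: the shadow of X on the plane
    # The residual X - Q2 X is orthogonal to both A1 and A2:
    assert np.allclose(A.T @ (X - Q2 @ X), 0.0)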

2.3 Another projection operator for the same loop

For
B=ZA
We have

Z = B (BT A)-1 BT

In fact we have

Z A = B (BT A)-1 BT A = B (5)

And in the loop in figure 3 for

Z1 = B (BT A)-1 BT

Z2 = A (AT B)-1 AT
We have the projection operator

Q = Z2 Z1 = A ( AT B )-1 AT B ( BT A )-1 BT = A (BT A)-1 BT

Where

Q A = A , Q2 = A (BT A)-1 BT A (BT A)-1 BT = A (BT A)-1 BT = Q

We have also another projection operator

Q = Z1 Z2 = B ( BT A )-1 BT A ( AT B )-1 AT = B ( AT B )-1 AT

Where

Q B = B , Q2 = B (AT B)-1 AT B (AT B)-1 AT = B (AT B)-1 AT = Q

In conclusion, for the same diagram in figure 3 we have four different types of projection operators.
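The four operators are of two kinds: Q1 and Q2 of section 2.1 are orthogonal projections (symmetric and idempotent), while the two operators above are in general oblique (idempotent but not symmetric). A sketch with the same assumed matrices:

    import numpy as np

    A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    B = np.array([[2.0, 1.0], [0.0, 3.0], [1.0, 1.0]])

    Q = A @ np.linalg.inv(B.T @ A) @ B.T     # Q = A (BT A)^-1 BT
    assert np.allclose(Q @ A, A)             # Q A = A
    assert np.allclose(Q @ Q, Q)             # idempotent
    print(np.allclose(Q, Q.T))               # False here: Q is an oblique projection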

3. SVD as a particular case of the Morphotronic System

Given the matrix A, we can build the projection operator

Q = A ( AT A )-1 AT = A A+,

where A+ is the pseudoinverse of A. Now, given a non-singular square matrix M, we have

( A M ) ( ( A M )T ( A M ) )-1 ( A M )T = A M M-1 ( AT A )-1 ( MT )-1 MT AT = A ( AT A )-1 AT = Q,

so Q is invariant under the change of A into A M. Now when M = V Σ-1, where V is the matrix of
the eigenvectors of AT A and Σ-1 is the diagonal matrix of the inverse square roots of the
eigenvalues of AT A, the well-known properties of the eigenvectors and eigenvalues give

MT ( AT A ) M = Σ-1 VT ( AT A ) V Σ-1 = Σ-1 Σ2 Σ-1 = I.

Now we put A M = U, so for M = V Σ-1 we have

U = A V Σ-1,

whose columns are orthonormal, and therefore

A = U Σ VT.

In conclusion, with the projection operator and the transformation M, which we can find from the
eigenvalues and eigenvectors of AT A, we obtain the SVD of A. Consider now the more general
projection operator

Q = A ( BT A )-1 BT , with Q A = A.

For A = R B we have
Q = R B ( BT R B )-1 BT

Now for the transformation U = B M we have

Q = R ( B M ) ( ( B M )T R ( B M ) )-1 ( B M )T,

so with the change of B into B M the projection operator Q is invariant. Now we must choose M in
such a way that

( B M )T R ( B M ) = MT ( BT R B ) M = I.

Given the eigenvector matrix V of the matrix BT R B and the diagonal matrix Σ whose diagonal
entries are the square roots of the eigenvalues, we put

M = V Σ-1, so that, by the properties of the eigenvalues and eigenvectors,

MT ( BT R B ) M = I.

Now we show the properties of M. In fact, a direct computation gives

M MT = V Σ-2 VT = ( BT R B )-1.

So we have the important property

M ( B M )T = M MT BT = ( BT R B )-1 BT = B+.

When we put B M = U we have the traditional decomposition, or SVD, of the pseudoinverse of B
in this way:

V Σ-1 UT = M ( B M )T = M MT BT = ( BT R B )-1 BT = B+.

In conclusion, since B M = U and M-1 = Σ VT, we obtain

B = U M-1 = U Σ VT,

and for R = I, where UT U = I, we also have

B+ B = V Σ-1 UT B = V Σ-1 UT U Σ VT = I.

So B = U Σ VT is exactly the SVD of B, obtained by the morphotronic definition of the projection
operator.

In conclusion, by V, Σ, U we can define the pseudoinverse of B and the projection operator;
conversely, by the eigenvalues and eigenvectors of BT R B we can obtain the V, Σ, U of the SVD.
We have shown that the SVD can be written as a particular case of the morphotronic
transformation and of its loop, or projection, operators.
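A numerical sketch of the whole construction for R = I, with an assumed 3 x 2 matrix B: the eigendecomposition of BT B yields V and Σ, M = V Σ-1 makes U = B M orthonormal, and the factors reproduce B and its pseudoinverse:

    import numpy as np

    B = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # assumed q x p matrix, q > p

    lam, V = np.linalg.eigh(B.T @ B)          # eigen-decomposition of BT B (R = I)
    idx = np.argsort(lam)[::-1]               # order the eigenvalues in decreasing order
    lam, V = lam[idx], V[:, idx]
    Sigma = np.diag(np.sqrt(lam))             # singular values = square roots of eigenvalues

    M = V @ np.linalg.inv(Sigma)              # M = V Sigma^-1
    U = B @ M                                 # U = B M
    assert np.allclose(M.T @ (B.T @ B) @ M, np.eye(2))   # MT (BT B) M = I
    assert np.allclose(U.T @ U, np.eye(2))               # columns of U are orthonormal
    assert np.allclose(U @ Sigma @ V.T, B)               # B = U Sigma VT
    # B+ = V Sigma^-1 UT = M (B M)T agrees with the Moore-Penrose pseudoinverse:
    assert np.allclose(M @ U.T, np.linalg.pinv(B))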

5. Immunocomputing, morphotronics and SVD

5.1 Principal definitions

Definition 1

A cell is a pair V = (f, P), where f is a real value, f ∈ R, whereas P
is a point of q-dimensional space, P ∈ Rq, and P lies within the unit cube:

max{ | p1 | ,…, | pq | } ≤ 1.

Definition 2

The affinity between two cells Vi = (fi, Pi) and Vk = (fk, Pk) is the distance between the points
Pi and Pk.

Definition 3

A cell Vi recognizes a cell Vk if the following condition is satisfied: the affinity between Vi and
Vk lies below a recognition threshold.

Definition 4

The formal immune network (FIN) is a set of cells W ⊇ W0, where W0 is a set of cells that form
the "innate immunity" controlled by the cytokines.

Rule 1 (apoptosis, elimination of redundancy)

If a cell of W is recognized by another cell of W, then remove it from the set W.

Rule 2 (autoimmunization, or memory)

If a cell is nearest to some cell of W among all cells, i.e. it is the most affine to that cell but is
beyond the recognition threshold, then in Immunocomputing we add it to W.

Definition 5

We define as "epitope" any point P of the q-dimensional space of the cells. In this space we
have the immune cells W, but we also have all the possible cells, including the "antigens".

5.2 Pattern recognition in IC by SVD and morphotronics

Given the matrix W of the immune cells (the FIN, with the cells as columns) and the matrix X of
the training vectors (the training antigens), we can project the training vectors into the space of
the immune cells, or FIN, by the projection operator

Q = W ( WT W )-1 WT.

For three dimensions of the q space we show the points of the projections in figure 5.

[Figure: the training vectors X1, X2, X3, the immune cells V1, V2, and the projection of a new cell Z in the space (p1, p2, p3).]

Figure 5 Example of pattern recognition of Z in IC by the morphotronic loop, or projection operator Q

Given the training matrix X of samples, we project the samples into the space of the immune cells,
or FIN, so as to obtain the learning process for the immune system. When the learning process is
finished, we have the mature immune system. Now, given a new cell Z, we project this cell into
the immune system and check which training projection, or immune cell, is affine to Z.
The immune cell that is affine to the projection of Z activates the whole immune system to destroy
the antigen Z.
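A minimal sketch of this recognition step, assuming immune cells stored as the columns of W and affinity measured as Euclidean distance; the threshold and all numbers are illustrative assumptions, not taken from the text:

    import numpy as np

    W = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])                # columns: immune cells (the FIN) in R^3

    Q = W @ np.linalg.inv(W.T @ W) @ W.T      # projection onto the FIN subspace

    def recognize(Z, W, Q, threshold=0.5):
        # Project the new cell Z into the FIN and find the most affine immune cell.
        z_proj = Q @ Z
        dists = np.linalg.norm(W - z_proj[:, None], axis=0)   # affinity = distance
        j = int(np.argmin(dists))
        return j if dists[j] < threshold else None            # None: Z is not recognized

    Z = np.array([0.9, 0.1, 1.1])             # new cell, a possible antigen
    print(recognize(Z, W, Q))                 # index of the recognizing cell, or None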

6. Conclusion

In this paper we have shown that the IC created by Tarakanov et al. (2003) is a special application
of a more general theory, denoted morphotronics, which includes all the properties of the SVD.
